The Latest Data Engineering Topics

Wrapping Begin/End Asynchronous API into C#5 Tasks
Microsoft has offered programmers several ways of dealing with asynchronous programming since .NET 1.0. The first model was the Asynchronous Programming Model, or APM for short. The pattern is implemented with a pair of methods named BeginOperation and EndOperation. .NET 4 introduced a new pattern – the Task-based Asynchronous Pattern – and with .NET 4.5, Microsoft added language support for this integrated asynchronous coding style. You can check MSDN for more samples and information; I will assume that you are familiar with it and have written code using it.

You can wrap an existing APM pair into the TPL pattern using the Task.Factory.FromAsync methods. For example:

```csharp
public static Task<IEnumerable<TElement>> ExecuteAsync<TElement>(
    this DataServiceQuery<TElement> query, object state)
{
    return Task.Factory.FromAsync<IEnumerable<TElement>>(
        query.BeginExecute, query.EndExecute, state);
}
```

It is easy to wrap most asynchronous functions this way, but some cannot be, because the wrapper overloads assume that the last two parameters of the Begin method are an AsyncCallback and an object, and some asynchronous operations have different signatures. Examples:

1. Extra parameters after the object state parameter:

```csharp
IAsyncResult DataServiceContext.BeginExecuteBatch(
    AsyncCallback callback, object state, params DataServiceRequest[] queries);
```

2. Missing the expected object state parameter and a different return type:

```csharp
ICancelableAsyncResult BeginQuery(AsyncCallback callBack);
WorkItemCollection EndQuery(ICancelableAsyncResult car);
```

Short solution for the first example

The short and elegant way to wrap the first example is to provide the following wrapper:

```csharp
public static Task<DataServiceResponse> ExecuteBatchAsync(
    this DataServiceContext context, object state, params DataServiceRequest[] queries)
{
    if (context == null)
        throw new ArgumentNullException("context");

    return Task.Factory.FromAsync(
        context.BeginExecuteBatch(null, state, queries),
        context.EndExecuteBatch);
}
```

We simply call the Begin method ourselves and then wrap it using another overload of the FromAsync function.

The longer way

However, we can fully wrap it ourselves by simulating what the FromAsync wrapper does. The complete code is listed below.

```csharp
public static Task<DataServiceResponse> ExecuteBatchAsync(
    this DataServiceContext context, object state, params DataServiceRequest[] queries)
{
    // this will be our sentry that will know when our async operation is completed
    var tcs = new TaskCompletionSource<DataServiceResponse>();
    try
    {
        context.BeginExecuteBatch((iar) =>
        {
            try
            {
                var result = context.EndExecuteBatch(iar);
                tcs.TrySetResult(result);
            }
            catch (OperationCanceledException)
            {
                // if the inner operation was canceled, this task is canceled too
                tcs.TrySetCanceled();
            }
            catch (Exception ex)
            {
                // general exception has been set
                bool flag = tcs.TrySetException(ex);
                if (flag && ex as ThreadAbortException != null)
                {
                    tcs.Task.m_contingentProperties.m_exceptionsHolder.MarkAsHandled(false);
                }
            }
        }, state, queries);
    }
    catch
    {
        tcs.TrySetResult(default(DataServiceResponse));
        // propagate exceptions to the outside
        throw;
    }
    return tcs.Task;
}
```

Besides its educational benefit, writing the full wrapper code allows us to add cancellation, logging and diagnostic information. Once we understand how to wrap the APM pattern, we can tackle the second problem easily.

Handling BeginQuery/EndQuery

We will first create our own wrapper function in the spirit of the code above, with the notable difference that we use the ICancelableAsyncResult interface instead of IAsyncResult.

```csharp
public static class TaskEx
{
    public static Task<TResult> FromAsync<TResult>(
        Func<AsyncCallback, ICancelableAsyncResult> beginMethod,
        Func<ICancelableAsyncResult, TResult> endMethod)
    {
        if (beginMethod == null)
            throw new ArgumentNullException("beginMethod");
        if (endMethod == null)
            throw new ArgumentNullException("endMethod");

        var tcs = new TaskCompletionSource<TResult>();
        try
        {
            beginMethod((iar) =>
            {
                try
                {
                    var result = endMethod(iar as ICancelableAsyncResult);
                    tcs.TrySetResult(result);
                }
                catch (OperationCanceledException)
                {
                    tcs.TrySetCanceled();
                }
                catch (Exception ex)
                {
                    bool flag = tcs.TrySetException(ex);
                    if (flag && ex as ThreadAbortException != null)
                    {
                        tcs.Task.m_contingentProperties.m_exceptionsHolder.MarkAsHandled(false);
                    }
                }
            });
        }
        catch
        {
            tcs.TrySetResult(default(TResult));
            throw;
        }
        return tcs.Task;
    }
}
```

The code is pretty self-explanatory, so we can go ahead with the wrapping. There are four different operations that are exposed in both synchronous and asynchronous versions: Query, LinkQuery, CountOnlyQuery and RegularQuery. The extension methods are short, since we have already created our generic wrapper above:

```csharp
public static Task<WorkItemCollection> RunQueryAsync(this Query query)
{
    return TaskEx.FromAsync(query.BeginQuery, query.EndQuery);
}

public static Task<WorkItemLinkInfo[]> RunLinkQueryAsync(this Query query)
{
    return TaskEx.FromAsync(query.BeginLinkQuery, query.EndLinkQuery);
}

public static Task<int> RunCountOnlyQueryAsync(this Query query)
{
    return TaskEx.FromAsync(query.BeginCountOnlyQuery, query.EndCountOnlyQuery);
}

public static Task<WorkItemCollection> RunRegularQueryAsync(this Query query)
{
    return TaskEx.FromAsync(query.BeginRegularQuery, query.EndRegularQuery);
}
```

That is it for today – you can easily write your own handy extensions for the APM functions out there.
June 21, 2012
by Toni Petrina
· 10,347 Views
Top 10 Causes of Java EE Enterprise Performance Problems
Performance problems are one of the biggest challenges to expect when designing and implementing Java EE related technologies.
June 20, 2012
by Pierre-Hugues Charbonneau
· 273,347 Views · 20 Likes
Fast Index Creation with InnoDB
InnoDB has been able to build indexes by sort since the InnoDB Plugin for MySQL 5.1, which is a lot faster than building them through insertion, especially for tables much larger than memory and for large uncorrelated indexes – you might be looking at a 10x difference or more. Yet for some reason the InnoDB team chose a very small (just 1MB) hard-coded buffer for this operation, which means almost any such index build has to use excessive sort merge passes, significantly slowing down the index build process. Mark Callaghan and the Facebook team fixed this in their tree back in early 2011, adding the innodb_merge_sort_block_size variable, and I thought this small patch would be merged into MySQL 5.5 promptly, yet that has not happened to date.

Here is an example of the gains you can expect (courtesy of Alexey Kopytov), using a 1-million-row Sysbench table:

Buffer Length | ALTER TABLE sbtest ADD KEY(c)
1MB           | 34 sec
8MB           | 26 sec
100MB         | 21 sec
128MB         | 17 sec
REBUILD       | 37 sec

REBUILD in this table means using fast_index_creation=0, which disables fast index creation in Percona Server and forces the complete table to be rebuilt instead. Looking at this data, we can see that even for such a small table it is possible to improve index creation time 2x by using a large buffer. We can also see a substantial improvement just by increasing the buffer from 1MB to 8MB, which might be a sensible default, since even small systems should be able to allocate 8MB for an ALTER TABLE. You may be wondering why table rebuild is so close in performance to building the index by sort with a small buffer – this comes from building an index on a long character field whose values are very short: InnoDB uses fixed-size records for sort space, which results in a lot more work than you would otherwise need. Some optimization to better deal with this case would also be nice. The table also fit completely in the buffer pool in this case, which means the table rebuild could be done quickly too. Results are from Percona Server 5.5.24.
June 19, 2012
by Peter Zaitsev
· 4,321 Views
How to Identify and Resolve Hibernate N+1 SELECT's Problems
Let’s assume that you’re writing code to track the prices of mobile phones. You have a collection of objects representing different mobile phone vendors (MobileVendor), and each vendor has a collection of objects representing the PhoneModels they offer. To put it simply, there exists a one-to-many relationship between MobileVendor and PhoneModel.

MobileVendor class:

```java
class MobileVendor {
    long vendor_id;
    PhoneModel[] phoneModels;
    ...
}
```

Okay, so you want to print out all the details of the phone models. A naive O/R implementation would SELECT all mobile vendors and then do N additional SELECTs to get the PhoneModel information for each vendor.

```sql
-- Get all Mobile Vendors
SELECT * FROM MobileVendor;

-- For each MobileVendor, get PhoneModel details
SELECT * FROM PhoneModel WHERE PhoneModel.vendor_id=?
```

As you can see, the N+1 problem happens when the first query populates the primary objects and a second query populates the child objects for each of the unique primary objects returned.

Resolving the N+1 SELECTs problem

(i) HQL fetch join

"from MobileVendor mobileVendor join fetch mobileVendor.phoneModels phoneModels"

The corresponding SQL would be (assuming the tables t_mobile_vendor for MobileVendor and t_phone_model for PhoneModel):

SELECT * FROM t_mobile_vendor vendor LEFT OUTER JOIN t_phone_model model ON model.vendor_id=vendor.vendor_id

(ii) Criteria query

```java
Criteria criteria = session.createCriteria(MobileVendor.class);
criteria.setFetchMode("phoneModels", FetchMode.EAGER);
```

In both cases, our query returns a list of MobileVendor objects with their phoneModels initialized. Only one query needs to be run to return all the PhoneModel and MobileVendor information required.
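For context, here is a hedged sketch of the access pattern that actually triggers the extra queries. The getter names (getPhoneModels(), getModelName()) and the Session bootstrap are assumptions for illustration; the mapping follows the MobileVendor/PhoneModel example above, with phoneModels mapped as a lazily loaded collection.

```java
import java.util.List;

import org.hibernate.Session;

public class NPlusOneDemo {

    @SuppressWarnings("unchecked")
    public static void printCatalog(Session session) {
        // 1 query: loads only the vendors
        List<MobileVendor> vendors = session.createQuery("from MobileVendor").list();

        for (MobileVendor vendor : vendors) {
            // N queries: each vendor's lazy phoneModels collection is
            // initialized with its own SELECT the first time it is touched
            for (PhoneModel model : vendor.getPhoneModels()) {
                System.out.println(model.getModelName());
            }
        }
    }
}
```

With the fetch join or the EAGER fetch mode shown above, the same loop runs against data that was loaded in the single joined query, so no extra SELECTs are issued.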
June 13, 2012
by Singaram Subramanian
· 201,120 Views · 13 Likes
Every Programmer Should Know These Latency Numbers
This is interesting stuff: Jonas Bonér organized some general latency data (originally assembled by Peter Norvig) as a Gist, and others expanded on it. What's interesting is how scaling time up by a billion converts a CPU instruction cycle into approximately one heartbeat, and yields a disk seek time of "a semester in university".

### Latency numbers every programmer should know

```
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
Round trip within same datacenter ...... 500,000 ns = 0.5 ms
Read 1 MB sequentially from SSD* ..... 1,000,000 ns = 1 ms
Disk seek ........................... 10,000,000 ns = 10 ms
Read 1 MB sequentially from disk .... 20,000,000 ns = 20 ms
Send packet CA->Netherlands->CA .... 150,000,000 ns = 150 ms

* Assuming ~1GB/sec SSD
```

![Visual representation of latencies](http://i.imgur.com/k0t1e.png)

Visual chart provided by [ayshen](https://gist.github.com/ayshen)

Data by [Jeff Dean](http://research.google.com/people/jeff/)

Originally by [Peter Norvig](http://norvig.com/21-days.html#answers)

Let's multiply all these durations by a billion. Magnitudes:

### Minute:

```
L1 cache reference                  0.5 s      One heart beat (0.5 s)
Branch mispredict                   5 s        Yawn
L2 cache reference                  7 s        Long yawn
Mutex lock/unlock                   25 s       Making a coffee
```

### Hour:

```
Main memory reference               100 s      Brushing your teeth
Compress 1K bytes with Zippy        50 min     One episode of a TV show (including ad breaks)
```

### Day:

```
Send 2K bytes over 1 Gbps network   5.5 hr     From lunch to end of work day
```

### Week:

```
SSD random read                     1.7 days   A normal weekend
Read 1 MB sequentially from memory  2.9 days   A long weekend
Round trip within same datacenter   5.8 days   A medium vacation
Read 1 MB sequentially from SSD     11.6 days  Waiting for almost 2 weeks for a delivery
```

### Year:

```
Disk seek                           16.5 weeks A semester in university
Read 1 MB sequentially from disk    7.8 months Almost producing a new human being
The above 2 together                1 year
```

### Decade:

```
Send packet CA->Netherlands->CA     4.8 years  Average time it takes to complete a bachelor's degree
```
June 12, 2012
by Howard Lewis Ship
· 136,673 Views
How to Get the JPQL/SQL String From a CriteriaQuery in JPA?
I.T. is full of complex things that should (and sometimes could) be simple. Getting the JPQL/SQL String representation of a JPA 2.0 CriteriaQuery is one of them. By now you all know the JPA 2.0 Criteria API: a type-safe way to write a JPQL query. This API is clever in that you don’t use Strings to build your query, but it is quite verbose… and sometimes you get lost in dozens of lines of Java code just to write a simple query. You get lost in your CriteriaQuery, you don’t know why your query doesn’t work, and you would love to debug it. But how do you debug it? Well, one way would be to display the JPQL and/or SQL representation. Simple, isn’t it? Yes, but JPA 2.0's javax.persistence.Query doesn’t have an API to do this. You then need to rely on the implementation… meaning the code is different depending on whether you use EclipseLink, Hibernate or OpenJPA.

The CriteriaQuery we want to debug

Let’s say you have a simple Book entity and you want to retrieve all the books sorted by their id – something like SELECT b FROM Book b ORDER BY b.id DESC. How would you write this with the CriteriaQuery? Well, something like these five lines of Java code:

```java
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Book> q = cb.createQuery(Book.class);
Root<Book> b = q.from(Book.class);
q.select(b).orderBy(cb.desc(b.get("id")));
TypedQuery<Book> findAllBooks = em.createQuery(q);
```

So imagine when you have more complex ones. Sometimes you just get lost, it gets buggy, and you would appreciate having the JPQL and/or SQL String representation to find out what’s happening. You could then even unit test it.

Getting the JPQL/SQL String representations for a Criteria Query

So let’s use an API to get the JPQL/SQL String representations of a CriteriaQuery (to be more precise, of the TypedQuery created from a CriteriaQuery). The bad news is that there is no standard JPA 2.0 API to do this. You need to use the implementation API, hoping the implementation allows it (thank god that’s (nearly) the case for the three main JPA ORM frameworks). The good news is that the Query interface (and therefore TypedQuery) has an unwrap method. This method returns the provider’s query API implementation. Let’s see how you can use it with EclipseLink, Hibernate and OpenJPA.

EclipseLink

EclipseLink’s Query representation is the org.eclipse.persistence.jpa.JpaQuery interface and the org.eclipse.persistence.internal.jpa.EJBQueryImpl implementation. This interface gives you the wrapped native query (org.eclipse.persistence.queries.DatabaseQuery) with two very handy methods: getJPQLString() and getSQLString(). Unfortunately the getJPQLString() method will not translate a CriteriaQuery into JPQL; it only works for queries originally written in JPQL (dynamic or named queries). The getSQLString() method relies on the query being “prepared”, meaning you have to run the query once before getting the SQL String representation.

```java
findAllBooks.unwrap(JpaQuery.class).getDatabaseQuery().getJPQLString(); // doesn't work for CriteriaQuery
findAllBooks.unwrap(JpaQuery.class).getDatabaseQuery().getSQLString();
```

Hibernate

Hibernate’s Query representation is org.hibernate.Query. This interface has several implementations and a very useful method that returns the SQL query string: getQueryString(). I couldn’t find a method that returns the JPQL representation; if I’ve missed something, please let me know.
```java
findAllBooks.unwrap(org.hibernate.Query.class).getQueryString()
```

OpenJPA

OpenJPA’s Query representation is org.apache.openjpa.persistence.QueryImpl, which also has a getQueryString() method; it returns the JPQL (not the SQL), as the digest table and the unit test below show. It delegates the call to the internal org.apache.openjpa.kernel.Query interface. I couldn’t find a method that returns the SQL representation; if I’ve missed something, please let me know.

```java
findAllBooks.unwrap(org.apache.openjpa.persistence.QueryImpl.class).getQueryString()
```

Unit testing

Once you get your query String, why not unit test it? "Hey, but I don’t want to test my ORM, why would I do that?" Well, it happens that I discovered a bug in the new releases of OpenJPA by unit testing a query… so there is a use case for that. Anyway, this is how you could do it:

```java
assertEquals("SELECT b FROM Book b ORDER BY b.id DESC",
             findAllBooksCriteriaQuery.unwrap(org.apache.openjpa.persistence.QueryImpl.class).getQueryString());
```

Conclusion

As you can see, it’s not that simple to get a String representation of a TypedQuery. Here is a digest for the three main ORMs:

ORM Framework | Query implementation | How to get the JPQL String              | How to get the SQL String
EclipseLink   | JpaQuery             | getDatabaseQuery().getJPQLString() (*)  | getDatabaseQuery().getSQLString() (**)
Hibernate     | Query                | N/A                                     | getQueryString()
OpenJPA       | QueryImpl            | getQueryString()                        | N/A

(*) Only possible on a dynamic or named query. Not possible on a CriteriaQuery.
(**) You need to execute the query first; if not, the value is null.

To illustrate all this, I’ve written simple test cases using EclipseLink, Hibernate and OpenJPA that you can download from GitHub. Give it a try and let me know.

And what about having an API in JPA 2.1?

From a developer's point of view it would be great to have two methods on the javax.persistence.Query (and therefore javax.persistence.TypedQuery) interface that could easily return the JPQL and SQL String representations, e.g. Query.getJPQLString() and Query.getSQLString(). Hey, that would be the perfect time to have it in JPA 2.1, which will ship in less than a year. Now, as an implementer, this might be tricky to do; I would love to hear your point of view on this. Anyway, I’m going to post an email to the JPA 2.1 Expert Group… just in case we can have this in the next version of JPA ;o)

References

http://efreedom.com/Question/1-6412774/Get-SQL-String-JPQLQuery
http://old.nabble.com/Cannot-get-the-JPQL---SQL-String-of-a-CriteriaQuery-td33882629.html
http://paddyweblog.blogspot.fr/2010/04/some-examples-of-criteria-api-jpa-20.html
http://www.altuure.com/2010/09/23/jpa-criteria-api-by-samples-part-i/
http://www.altuure.com/2010/09/23/jpa-criteria-api-by-samples-%E2%80%93-part-ii/
http://www.jumpingbean.co.za/blogs/jpa2-criteria-api
http://wiki.eclipse.org/EclipseLink/FAQ/JPA#How_to_get_the_SQL_for_a_Query.3F
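To illustrate the EclipseLink footnote above (the SQL is only available once the query has been prepared), a hedged test sketch might look like the following. The entity and the EntityManager setup are assumptions carried over from the Book example; this is a sketch of the behaviour described in the article, not code taken from its GitHub samples.

```java
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;

import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

import org.eclipse.persistence.jpa.JpaQuery;
import org.junit.Test;

public class EclipseLinkSqlStringTest {

    private EntityManager em; // assumed to be initialized elsewhere (e.g. in a @Before method)

    @Test
    public void sqlStringIsOnlyAvailableAfterExecution() {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Book> q = cb.createQuery(Book.class);
        Root<Book> b = q.from(Book.class);
        q.select(b).orderBy(cb.desc(b.get("id")));
        TypedQuery<Book> findAllBooks = em.createQuery(q);

        // before execution the query has not been "prepared" yet, so no SQL is available
        assertNull(findAllBooks.unwrap(JpaQuery.class).getDatabaseQuery().getSQLString());

        findAllBooks.getResultList();

        // after execution the generated SQL can be inspected (and asserted against)
        assertNotNull(findAllBooks.unwrap(JpaQuery.class).getDatabaseQuery().getSQLString());
    }
}
```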
June 5, 2012
by Antonio Goncalves
· 60,154 Views · 1 Like
Database unit testing with DBUnit, Spring and TestNG
I really like Spring, so I tend to use its features to the fullest. However, in some dark corners of its philosophy, I tend to disagree with some of its assumptions. One such assumption is the way database testing should work. In this article, I will explain how to configure your projects to make Spring Test and DBUnit play nicely together in a multi-developer environment.

Context

My basic need is to be able to test some complex queries: before integration tests, I have to validate that those queries get me the right results. These are not unit tests per se, but let's assimilate them as such. To achieve this, I have been using a framework named DBUnit for a while. Although not maintained since late 2010, I haven't yet found a replacement (be my guest for proposals). I also have some constraints:

- I want to use TestNG for all my test classes, so that new developers don't have to think about which test framework to use
- I want to be able to use Spring Test, so that I can inject my test dependencies directly into the test class
- I want to be able to see for myself the database state at the end of any of my tests, so that if something goes wrong, I can execute my own queries to discover why
- I want every developer to have their own isolated database instance/schema

Considering the last point, our organization lets us benefit from a single Oracle schema per developer for these "unit tests".

Basic set up

Spring provides the AbstractTestNGSpringContextTests class out of the box. In turn, this means we can apply TestNG annotations as well as @Autowired on child classes. It also means we have access to the underlying applicationContext, but I prefer not to use it (and don't need to in any case). The structure of such a test would look like this:

```java
@ContextConfiguration(locations = "classpath:persistence-beans.xml")
public class MyDaoTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyDao myDao;

    @Test
    public void whenXYZThenTUV() {
        ...
    }
}
```

Readers familiar with Spring and TestNG shouldn't be surprised here.

Bringing in DBUnit

DbUnit is a JUnit extension targeted at database-driven projects that, among other things, puts your database into a known state between test runs. [...] DbUnit has the ability to export and import your database data to and from XML datasets. Since version 2.0, DbUnit can also work with very large datasets when used in streaming mode. DbUnit can also help you to verify that your database data match an expected set of values.

DBUnit being a JUnit extension, you are expected to extend the provided parent class org.dbunit.DBTestCase. In my context, I have to redefine some setup and teardown operations to use Spring's inheritance hierarchy. Luckily, DBUnit's developers thought about that and offer relevant documentation. Among the different strategies available, my tastes tend toward the CLEAN_INSERT and NONE operations on setup and teardown respectively. This way, I can check the database state directly if my test fails.
This updates my test class like so:

```java
@ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
public class MyDaoTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyDao myDao;

    @Autowired
    private IDatabaseTester databaseTester;

    @BeforeMethod
    protected void setUp() throws Exception {
        // Get the XML and set it on the databaseTester
        // Optional: get the DTD and set it on the databaseTester
        databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        databaseTester.setTearDownOperation(DatabaseOperation.NONE);
        databaseTester.onSetup();
    }

    @Test
    public void whenXYZThenTUV() {
        ...
    }
}
```

Per-user configuration with Spring

Of course, we need a specific Spring configuration file to inject the databaseTester; a sketch of such a configuration appears at the end of this article. However, there's more than meets the eye. Notice that the databaseTester has to be fed a datasource. Since one requirement is to have a database per developer, there are basically two options: either use an in-memory database, or use the same database as in production and provide one such database schema per developer. I tend toward the latter solution (when possible), since it decreases the differences between the testing environment and the production environment.

Thus, in order for each developer to use their own schema, I use Spring's ability to replace Java system properties at runtime: each developer is characterized by a different user.name. Then I configure a PlaceholderConfigurer that looks for a {user.name}.database.properties file, which will look like so:

db.username=myusername1
db.password=mypassword1
db.schema=myschema1

This lets me achieve my goal of each developer using their own instance of Oracle. If you want to use this strategy, do not forget to provide a specific database.properties for the Continuous Integration server.

Huh oh?

Finally, the whole testing chain is configured down to the database tier. Yet, when the previous test is run, everything is fine (or not), but when checking the database, it looks untouched. Strangely enough, if you did load some XML dataset and assert it during the test, it does behave accordingly: this bears all the symptoms of a transaction issue. In fact, when you look closely at Spring's documentation, everything becomes clear. Spring's vision is that the database should be left untouched by running tests, in complete contradiction to DBUnit's. This is achieved by simply rolling back all changes at the end of the test by default. In order to change this behavior, the only thing to do is annotate the test class with @TransactionConfiguration(defaultRollback=false). Note that this doesn't prevent us from marking specific methods that shouldn't affect the database state on a case-by-case basis with the @Rollback annotation. The test class becomes:

```java
@ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
@TransactionConfiguration(defaultRollback=false)
public class MyDaoTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyDao myDao;

    @Autowired
    private IDatabaseTester databaseTester;

    @BeforeMethod
    protected void setUp() throws Exception {
        // Get the XML and set it on the databaseTester
        // Optional: get the DTD and set it on the databaseTester
        databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        databaseTester.setTearDownOperation(DatabaseOperation.NONE);
        databaseTester.onSetup();
    }

    @Test
    public void whenXYZThenTUV() {
        ...
    }
}
```

Conclusion

Though Spring's and DBUnit's views on database testing are opposed, Spring's configuration versatility lets us make it fit our needs (and benefit from DI). Of course, other improvements are possible: pushing common code up into a parent test class, etc.

To go further:

- Spring Test documentation
- DBUnit site
- Database data verification
- Database testing best practices
- Generating a DTD from your database schema
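As promised above, here is a hedged sketch of the per-developer configuration that supplies the databaseTester. It is expressed as Spring Java config rather than the article's XML, and the driver class, JDBC URL and bean names are illustrative assumptions; only the db.username/db.password/db.schema keys and the {user.name}.database.properties convention come from the article.

```java
import javax.sql.DataSource;

import org.dbunit.DataSourceDatabaseTester;
import org.dbunit.IDatabaseTester;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
// ${user.name} is a JVM system property, so each developer picks up a different properties file
@PropertySource("classpath:${user.name}.database.properties")
public class TestBeansConfig {

    @Bean
    public DataSource dataSource(Environment env) {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("oracle.jdbc.OracleDriver");
        dataSource.setUrl("jdbc:oracle:thin:@localhost:1521:XE"); // hypothetical URL
        dataSource.setUsername(env.getProperty("db.username"));
        dataSource.setPassword(env.getProperty("db.password"));
        return dataSource;
    }

    @Bean
    public IDatabaseTester databaseTester(DataSource dataSource, Environment env) {
        // DBUnit tester backed by the per-developer datasource and schema
        return new DataSourceDatabaseTester(dataSource, env.getProperty("db.schema"));
    }
}
```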
June 4, 2012
by Nicolas Fränkel
· 59,344 Views
Spring Integration Gateways - Null Handling & Timeouts
Spring Integration (SI) Gateways

Spring Integration Gateways provide a semantically rich interface to message sub-systems. Gateways are specified using namespace constructs; these reference a specific Java interface that is backed by an object dynamically implemented at run-time by the Spring Integration framework. Furthermore, these Java interfaces can, if you so wish, be defined entirely independently of any Spring artefacts – that's both code and configuration. One of the primary advantages of using the SI gateway as an interface to message sub-systems is that it's possible to automatically adopt the benefit of rich, default and customisable gateway configuration. One such configuration attribute deserves further scrutiny and discussion, primarily because it's easy to misunderstand and misconfigure around: default-reply-timeout.

Primary Motivator for Gateway Analysis

During recent consulting engagements, I've encountered a number of deployments that use Spring Integration Gateway specifications that may, in some circumstances, lead to production operational instability. This has often been in high-pressure environments or those where technology support is not backed by adequate training, testing, review or technology mentoring.

How do gateways behave in Spring Integration (R2.0.5)?

One of the key sections regarding gateways in the Spring Integration manual clearly explains gateway semantics. Below is a two-dimensional table of possible non-standard gateway returns for each of the scenarios that the SI manual (r2.0.5) refers to.

Gateway Non-standard Responses

Runtime Events              | default-reply-timeout=x, Single-threaded | default-reply-timeout=x, Multi-threaded | default-reply-timeout=null, Single-threaded | default-reply-timeout=null, Multi-threaded
1. Long Running Process     | Thread Parked | null returned | Thread Parked | Thread Parked
2. Null Returned Downstream | null returned | null returned | Thread Parked | Thread Parked
3. void Method Downstream   | null returned | null returned | Thread Parked | Thread Parked
4. Runtime Exception        | Error handler invoked or exception thrown | Error handler invoked or exception thrown | Error handler invoked or exception thrown | Error handler invoked or exception thrown

The key parts of this table are the conditions that lead to invoking threads being parked (noted in red), nulls returned (noted in orange) and exceptions (noted in green). Each contributor consists of configuration that is under the developer's control, deployed code that is under the developer's control, and conditions that are usually not under the developer's control. Clearly, the column headings in the table above are divided into two sections: two gateway configuration attributes. The default-reply-timeout is set by the SI configurer and is the amount of time that a client call is willing to wait for a response from the gateway. Secondly, synchronous flows are represented by single-threaded flows, asynchronous flows by multi-threaded flows. A synchronous, or single-threaded, flow is one where the implicit input channel (gateway-request-channel) has no associated dispatcher configured. An asynchronous, or multi-threaded, flow is one where the explicit input channel has a dispatcher configured ("taskExecutor"). This task executor specifies a thread pool that supplies threads for execution and whose configuration marks a thread boundary.
Note: This is not the only way of making channels asynchronous The other configuration attribute referenced is default-reply-timeout, this is set on the gateway namespace configuration such as the example above. Note that both of these runtime aspects are set by the configurer during SI flow design and implementation. They are entirely under developer control. The 'Runtime Events' column indicates gateway relevant runtime events that have to be considered during gateway configuration - these are obviously not under developer control. Trigger conditions for these events are not as unusual as one may hope. 1. Long Running Processes It's not uncommon for thread pools to become exhausted because all pooled threads are waiting for an external resource accessed through a socket, this may be a long running database query, a firewall keeping a connection open despite the server terminating etc. There is significant potential for these types of trigger. Some long-running processes terminate naturally, sometimes they never completed - an application restart is required. 2. Null returned downstream A null may be returned from a downstream SI construct such as a Transformer, Service Activator or Gateway. A Gateway may return null in some circumstances such as following a gateway timeout event. 3. Void method downstream Any custom code invoked during an SI flow may use a void method signature. This can also be caused by configuration in circumstances where flows are determined dynamically at runtime. 4. Runtime Exception RuntimeException's can be triggered during normal operation and are generally handled by catching them at the gateway or allowing them to propagate through. The reason that they are coloured green in the table above is that they are generally much easier to handle than timeouts. Gateway Timeout Handling Strategies There are four possible outcomes from invoking a gateway with a request message, all of these as a result of specific runtime events: a) an ordinary message response, b) an exception message, c) a null or d) no-response. Ordinary business responses and exceptions are straight forward to understand and will not be covered further in this article. The two significant outcomes that will be explored further are strategies for dealing with nulls and no-response. Generally speaking, long running processes either terminate or not. Long running processes that terminate may eventually return a message through the invoked gateway or timeout depending on timeout configuration, in which case a null may be returned. The severity of this as a problem depends on throughput volume, length of long running process and system resources (thread-pool size). Configuration exists for default-reply-timeout In the case where a long running process event is underway and a default-reply-timeout has been set, as long as the long running process completes before the default-reply-timeout expires, there is no problem to deal with. However, if the long running process does not complete before that timeout expires one of three outcomes will apply. Firstly, if the long running process terminates subsequent to the reply timeout expiry, the gateway will have already returned null to the invoker so the null response needs handling by the invoker. The thread handling the long-running process will be returned to the pool. 
Secondly, if the long running process does not terminate and a reply timeout has been set, the gateway will return null to the gateway invoker but the thread executing the long-running process will not get returned to the pool. Thirdly, and most significantly, if a default-reply-timeout has been configured but the long running process is running on the same thread as the invoker, i.e. synchronous channels supply messages to that process, the thread will not return, the default-reply-timeout has no affect. Assuming the most common processing scenario, a long running process completes either before or after the reply timeout expiry. When a null is returned by the gateway, the invoker is forced to deal with a null response. It's often unacceptable to force gateway consumers to deal with null responses and is not necessary as with a little additional configuration, this can be avoided. Absent Configuration for default-reply-timeout The most significant danger exists around gateways that have no default-reply-timeout configuration set. A long running process or a null returned from downstream will mean that the invoking thread is parked. This is true for both synchronous and asynchronous flows and may ultimately force an application to be restarted because the invoker thread pool is likely to start on a depletion course if this continues to occur. Spring Integration Timeout Handling Design Strategies For those Spring Integration configuration designers that are comfortable with gateway invokers dealing with null responses, exceptions and set default-reply-timeouts on gateways, there's no need to read further. However, if you wish to provide clients of your gateway a more predictable response, a couple of strategies exist for handling null responses from gateways in order that invokers are protected from having to deal with them. Firstly, the simpliest solution is to wrap the gateway with a service activator. The gateway must have the default-reply-timeout attribute value set in order to avoid unnecessary parking of threads. In order to avoid the consequence of long-running threads it's also very prudent to use a dispatcher soon after entry to the gateway - this breaks the thread boundary. Whilst this is a valid technical approach, the impact is that we have forced a different entry point to our message sub-system. Entry is now via a Service Activator rather than a Gateway. A side affect of this change is that the testing entry point changes. Integration tests that would normally reference a gateway to send a message now have to locate the backing implementation for the Service Activator, not ideal. An alternative approach toward solving this problem would be to configure two gateways with a Service Activator between them. Only one of the gateways would be exposed to invokers, the outer one. Both Gateways would reference the same service interface. The outer gateway specification would not specify the default-reply-timeout but would specify the input and output channels in the same way that a single gateway would. The Service Activator between the Gateways would handle null gateway responses and possibly any exceptions if preferred to the gateway error handler approach. An example is as follows: The Service Activator bean (enrollmentServiceGatewayHandler) deals with both null and exception responses from the adapted gateway (enrollmentServiceAdaptedGateway), in the situation where these are generated a business response detailing the error is generated. 
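Since the XML for the two-gateway arrangement is not reproduced above, here is a hedged Java sketch of the pieces that sit around it: the shared service interface that both the outer (exposed) gateway and the inner (adapted) gateway reference, and a handler in the spirit of the enrollmentServiceGatewayHandler bean that shields invokers from null replies and exceptions. The type names (EnrollmentService, EnrollmentRequest, EnrollmentResponse) and the error(...) factory are illustrative assumptions, not taken from the original configuration.

```java
// Shared service interface: both gateways are backed by dynamic implementations of this contract.
interface EnrollmentService {
    EnrollmentResponse enroll(EnrollmentRequest request);
}

class EnrollmentRequest {
    private final String studentId;
    EnrollmentRequest(String studentId) { this.studentId = studentId; }
    String getStudentId() { return studentId; }
}

class EnrollmentResponse {
    private final boolean success;
    private final String message;
    private EnrollmentResponse(boolean success, String message) {
        this.success = success;
        this.message = message;
    }
    static EnrollmentResponse ok(String message)    { return new EnrollmentResponse(true, message); }
    static EnrollmentResponse error(String message) { return new EnrollmentResponse(false, message); }
    boolean isSuccess() { return success; }
    String getMessage() { return message; }
}

// POJO referenced by the service activator that sits between the two gateways.
// It calls the adapted gateway (which has default-reply-timeout set) and turns a
// null reply or a runtime exception into a business-level error response, so
// outer-gateway invokers never have to deal with nulls themselves.
class EnrollmentServiceGatewayHandler {

    private final EnrollmentService adaptedGateway;

    EnrollmentServiceGatewayHandler(EnrollmentService adaptedGateway) {
        this.adaptedGateway = adaptedGateway;
    }

    public EnrollmentResponse handle(EnrollmentRequest request) {
        try {
            EnrollmentResponse response = adaptedGateway.enroll(request);
            if (response == null) {
                // the inner gateway's default-reply-timeout expired
                return EnrollmentResponse.error("No reply received within the configured timeout");
            }
            return response;
        } catch (RuntimeException e) {
            return EnrollmentResponse.error("Enrollment failed: " + e.getMessage());
        }
    }
}
```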
Spring Integration R2.1 Changes

- async-executor on gateway spec
May 26, 2012
by Matt Vickery
· 23,989 Views · 1 Like
The Limited Usefulness of AsyncContext.start()
Some time ago I came across What's the purpose of AsyncContext.start(...) in Servlet 3.0? question. Quoting the Javadoc of aforementioned method: Causes the container to dispatch a thread, possibly from a managed thread pool, to run the specified Runnable. To remind all of you, AsyncContext is a standard way defined in Servlet 3.0 specification to handle HTTP requests asynchronously. Basically HTTP request is no longer tied to an HTTP thread, allowing us to handle it later, possibly using fewer threads. It turned out that the specification provides an API to handle asynchronous threads in a different thread pool out of the box. First we will see how this feature is completely broken and useless in Tomcat and Jetty - and then we will discuss why the usefulness of it is questionable in general. Our test servlet will simply sleep for given amount of time. This is a scalability killer in normal circumstances because even though sleeping servlet is not consuming CPU, but sleeping HTTP thread tied to that particular request consumes memory - and no other incoming request can use that thread. In our test setup I limited the number of HTTP worker threads to 10 which means only 10 concurrent requests are completely blocking the application (it is unresponsive from the outside) even though the application itself is almost completely idle. So clearly sleeping is an enemy of scalability. @WebServlet(urlPatterns = Array("/*")) class SlowServlet extends HttpServlet with Logging { protected override def doGet(req: HttpServletRequest, resp: HttpServletResponse) { logger.info("Request received") val sleepParam = Option(req.getParameter("sleep")) map {_.toLong} TimeUnit.MILLISECONDS.sleep(sleepParam getOrElse 10) logger.info("Request done") } } Benchmarking this code reveals that the average response times are close to sleep parameter as long as the number of concurrent connections is below the number of HTTP threads. Unsurprisingly the response times begin to grow the moment we exceed the HTTP threads count. Eleventh connection has to wait for any other request to finish and release worker thread. When the concurrency level exceeds 100, Tomcat begins to drop connections - too many clients are already queued. So what about the the fancy AsyncContext.start() method (do not confuse with ServletRequest.startAsync())? According to the JavaDoc I can submit any Runnable and the container will use some managed thread pool to handle it. This will help partially as I no longer block HTTP worker threads (but still another thread somewhere in the servlet container is used). Quickly switching to asynchronous servlet: @WebServlet(urlPatterns = Array("/*"), asyncSupported = true) class SlowServlet extends HttpServlet with Logging { protected override def doGet(req: HttpServletRequest, resp: HttpServletResponse) { logger.info("Request received") val asyncContext = req.startAsync() asyncContext.setTimeout(TimeUnit.MINUTES.toMillis(10)) asyncContext.start(new Runnable() { def run() { logger.info("Handling request") val sleepParam = Option(req.getParameter("sleep")) map {_.toLong} TimeUnit.MILLISECONDS.sleep(sleepParam getOrElse 10) logger.info("Request done") asyncContext.complete() } }) } } We are first enabling the asynchronous processing and then simply moving sleep() into a Runnable and hopefully a different thread pool, releasing the HTTP thread pool. Quick stress test reveals slightly unexpected results (here: response times vs. 
number of concurrent connections): Guess what, the response times are exactly the same as with no asynchronous support at all (!) After closer examination I discovered that when AsyncContext.start() is called Tomcat submits given task back to... HTTP worker thread pool, the same one that is used for all HTTP requests! This basically means that we have released one HTTP thread just to utilize another one milliseconds later (maybe even the same one). There is absolutely no benefit of calling AsyncContext.start() in Tomcat. I have no idea whether this is a bug or a feature. On one hand this is clearly not what the API designers intended. The servlet container was suppose to manage separate, independent thread pool so that HTTP worker thread pool is still usable. I mean, the whole point of asynchronous processing is to escape the HTTP pool. Tomcat pretends to delegate our work to another thread, while it still uses the original worker thread pool. So why I consider this to be a feature? Because Jetty is "broken" in exactly same way... No matter whether this works as designed or is only a poor API implementation, using AsyncContext.start() in Tomcat and Jetty is pointless and only unnecessarily complicates the code. It won't give you anything, the application works exactly the same under high load as if there was no asynchronous logic at all. But what about using this API feature on correct implementations like IBM WAS? It is better, but still the API as is doesn't give us much in terms of scalability. To explain again: the whole point of asynchronous processing is the ability to decouple HTTP request from an underlying thread, preferably by handling several connections using the same thread. AsyncContext.start() will run the provided Runnable in a separate thread pool. Your application is still responsive and can handle ordinary requests while long-running request that you decided to handle asynchronously are processed in a separate thread pool. It is better, unfortunately the thread pool and thread per connection idiom is still a bottle-neck. For the JVM it doesn't matter what type of threads are started - they still occupy memory. So we are no longer blocking HTTP worker threads, but our application is not more scalable in terms of concurrent long-running tasks we can support. In this simple and unrealistic example with sleeping servlet we can actually support thousand of concurrent (waiting) connections using Servlet 3.0 asynchronous support with only one extra thread - and without AsyncContext.start(). Do you know how? Hint: ScheduledExecutorService. Postscriptum: Scala goodness I almost forgot. Even though examples were written in Scala, I haven't used any cool language features yet. Here is one: implicit conversions. Make this available in your scope: implicit def blockToRunnable[T](block: => T) = new Runnable { def run() { block } } And suddenly you can use code block instead of instantiating Runnable manually and explicitly: asyncContext start { logger.info("Handling request") val sleepParam = Option(req.getParameter("sleep")) map { _.toLong} TimeUnit.MILLISECONDS.sleep(sleepParam getOrElse 10) logger.info("Request done") asyncContext.complete() } Sweet!
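To make the closing hint concrete, here is a hedged Java sketch (the servlet name and the single-thread scheduler are my choices, not the author's code) of how one extra thread can serve thousands of parked "sleeping" requests: the HTTP worker thread returns immediately after startAsync(), no thread sleeps per request, and a ScheduledExecutorService simply completes each response after the requested delay.

```java
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/*", asyncSupported = true)
public class ScheduledSlowServlet extends HttpServlet {

    // one thread is enough: it only fires completions, it never blocks per request
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        long sleepMillis = req.getParameter("sleep") != null
                ? Long.parseLong(req.getParameter("sleep")) : 10L;

        final AsyncContext asyncContext = req.startAsync();
        asyncContext.setTimeout(TimeUnit.MINUTES.toMillis(10));

        // the HTTP thread returns here; the request stays parked until the timer fires
        scheduler.schedule(new Runnable() {
            public void run() {
                asyncContext.complete();
            }
        }, sleepMillis, TimeUnit.MILLISECONDS);
    }

    @Override
    public void destroy() {
        scheduler.shutdown();
    }
}
```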
May 22, 2012
by Tomasz Nurkiewicz
· 17,128 Views · 1 Like
Lucene Setup on OracleDB in 5 Minutes
This tutorial is for people who want to run an Apache Lucene example with OracleDB in just five minutes.
May 19, 2012
by Mohammad Juma
· 30,871 Views · 4 Likes
Virtualization in WPF with VirtualizingStackPanel
I first blogged about this on my previous blog here: http://consultingblogs.emc.com/merrickchaffer/archive/2011/02/14/virtualization-in-wpf-with-virtualizingstackpanel.aspx However, having come across this again today on a project, I thought it was important enough to re-blog!

I finally managed to figure out how to get virtualization to actually behave itself in a WPF ListBox control. It turns out that in order for virtualization to work, you need the following satisfied:

1. Use a control that supports virtualization (e.g. ListBox or ListView). (See the "Controls That Implement Performance Features" section at the bottom of this page for more info: http://msdn.microsoft.com/en-us/library/cc716879.aspx#Controls)
2. Ensure that the ScrollViewer.CanContentScroll attached property is set to True on the containing ListBox / ListView control.
3. Ensure that either the ListBox has a height set, or that it is contained within a parent Grid row whose row definition has a height set (Height="*" will do if you want it to occupy the client window height). Note: do not use Height="Auto", as this will not work – it instructs WPF to simply size the row to the height needed to fit all the items of the ListBox, so you never get the vertical scroll bar appearing.
4. Ensure that there is no wrapping ScrollViewer control around the ListBox, as this will prevent virtualization from occurring.
5. Ensure that you use a VirtualizingStackPanel in the ItemsPanelTemplate for the ListBox.ItemsPanel.
May 14, 2012
by Merrick Chaffer
· 27,885 Views
EasyNetQ, a simple .NET API for RabbitMQ
After pondering the results of our message queue shootout, we decided to run with Rabbit MQ. Rabbit ticks all of the boxes, it’s supported (by Spring Source and then VMware ultimately), scales and has the features and performance we need. The RabbitMQ.Client provided by Spring Source is a thin wrapper that quite faithfully exposes the AMQP protocol, so it expects messages as byte arrays. For the shootout tests spraying byte arrays around was fine, but in the real world, we want our messages to be .NET types. I also wanted to provide developers with a very simple API that abstracted away the Exchange/Binding/Queue model of AMQP and instead provides a simple publish/subscribe and request/response model. My inspiration was the excellent work done by Dru Sellers and Chris Patterson with MassTransit (the new V2.0 beta is just out). The code is on GitHub here: https://github.com/mikehadlow/EasyNetQ The API centres around an IBus interface that looks like this: /// /// Provides a simple Publish/Subscribe and Request/Response API for a message bus. /// public interface IBus : IDisposable { /// /// Publishes a message. /// /// The message type /// The message to publish void Publish(T message); /// /// Subscribes to a stream of messages that match a .NET type. /// /// The type to subscribe to /// /// A unique identifier for the subscription. Two subscriptions with the same subscriptionId /// and type will get messages delivered in turn. This is useful if you want multiple subscribers /// to load balance a subscription in a round-robin fashion. /// /// /// The action to run when a message arrives. /// void Subscribe(string subscriptionId, Action onMessage); /// /// Makes an RPC style asynchronous request. /// /// The request type. /// The response type. /// The request message. /// The action to run when the response is received. void Request(TRequest request, Action onResponse); /// /// Responds to an RPC request. /// /// The request type. /// The response type. /// /// A function to run when the request is received. It should return the response. /// void Respond(Func responder); } To create a bus, just use a RabbitHutch, sorry I couldn’t resist it :) var bus = RabbitHutch.CreateRabbitBus("localhost"); You can just pass in the name of the server to use the default Rabbit virtual host ‘/’, or you can specify a named virtual host like this: var bus = RabbitHutch.CreateRabbitBus("localhost/myVirtualHost"); The first messaging pattern I wanted to support was publish/subscribe. Once you’ve got a bus instance, you can publish a message like this: var message = new MyMessage {Text = "Hello!"}; bus.Publish(message); This publishes the message to an exchange named by the message type. You subscribe to a message like this: bus.Subscribe("test", message => Console.WriteLine(message.Text)); This creates a queue named ‘test_’ and binds it to the message type’s exchange. When a message is received it is passed to the Action delegate. If there are more than one subscribers to the same message type named ‘test’, Rabbit will hand out the messages in a round-robin fashion, so you get simple load balancing out of the box. Subscribers to the same message type, but with different names will each get a copy of the message, as you’d expect. The second messaging pattern is an asynchronous RPC. You can call a remote service like this: var request = new TestRequestMessage {Text = "Hello from the client! 
"}; bus.Request(request, response => Console.WriteLine("Got response: '{0}'", response.Text)); This first creates a new temporary queue for the TestResponseMessage. It then publishes the TestRequestMessage with a return address to the temporary queue. When the TestResponseMessage is received, it passes it to the Action delegate. RabbitMQ happily creates temporary queues and provides a return address header, so this was very easy to implement. To write an RPC server. Simple use the Respond method like this: bus.Respond(request => new TestResponseMessage { Text = request.Text + " all done!" }); This creates a subscription for the TestRequestMessage. When a message is received, the Func delegate is passed the request and returns the response. The response message is then published to the temporary client queue. Once again, scaling RPC servers is simply a question of running up new instances. Rabbit will automatically distribute messages to them. The features of AMQP (and Rabbit) make creating this kind of API a breeze. Check it out and let me know what you think.
May 13, 2012
by Mike Hadlow
· 10,944 Views
Martin Fowler on ORM Hate
While I was at the QCon conference in London a couple of months ago, it seemed that every talk included some snarky remarks about object/relational mapping (ORM) tools. I guess I should read the conference emails sent to speakers more carefully; doubtless there was something in there telling us all to heap scorn upon ORMs at least once every 45 minutes. But as you can tell, I want to push back a bit against this ORM hate, because I think a lot of it is unwarranted. The charges against them can be summarized as: they are complex, and they provide only a leaky abstraction over a relational data store. Their complexity implies a grueling learning curve, and systems using an ORM often perform badly, frequently due to naive interactions with the underlying database. There is a lot of truth to these charges, but they miss a vital piece of context: the object/relational mapping problem is hard. Essentially what you are doing is synchronizing between two quite different representations of data, one in the relational database and the other in memory. Although this is usually referred to as object-relational mapping, objects aren't really the issue here; by rights it should be called the in-memory/relational mapping problem, because it's true of mapping an RDBMS to any in-memory data structure. In-memory data structures offer much more flexibility than relational models, so to program effectively most people want to use the more varied in-memory structures and are thus faced with mapping them back to relations for the database. The mapping is further complicated because you can make changes on either side that have to be mapped to the other. More complication arrives because multiple people can be accessing and modifying the database simultaneously. The ORM has to handle this concurrency because you can't just rely on transactions; in most cases, you can't hold transactions open while you fiddle with the data in memory. I think that if you're going to dump on something the way many people do on ORMs, you have to state the alternative. What do you do instead of an ORM? The cheap shots I usually hear ignore this, because this is where it gets messy. Basically it boils down to two strategies: solve the problem differently (and better), or avoid the problem. Both of these have significant flaws. A Better Solution Listening to some critics, you'd think that the best thing for a modern software developer to do is roll their own ORM. The implication is that tools like Hibernate and Active Record have just become bloatware, so you should come up with your own lightweight alternative. Now I've spent many an hour griping at bloatware, but ORMs really don't fit the bill, and I say this with bitter memory. For much of the 90s I saw project after project deal with the object/relational mapping problem by writing their own framework; it was always much tougher than people imagined. Usually you'd get enough early success to commit deeply to the framework, and only after a while did you realize you were in a quagmire. This is where I sympathize greatly with Ted Neward's famous quote that object-relational mapping is the Vietnam of computer science [1]. The widely available open source ORMs (such as iBATIS, Hibernate, and Active Record) did a great deal to remove this problem [2]. Certainly they are not trivial tools to use; as I said, the underlying problem is hard. But you don't have to deal with the full experience of writing that stuff (the horror, the horror).
However much you may hate using an ORM, take my word for it: you're better off. I've often felt that much of the frustration with ORMs is about inflated expectations. Many people treat the relational database "like a crazy aunt who's shut up in an attic and whom nobody wants to talk about" [3]. In this world-view they just want to deal with in-memory data structures and let the ORM deal with the database. This way of thinking can work for small applications and loads, but it soon falls apart once the going gets tough. Essentially the ORM can handle about 80-90% of the mapping problems, but that last chunk always needs careful work by somebody who really understands how a relational database works. This is the source of the criticism that an ORM is a leaky abstraction. That's true, but it isn't necessarily a reason to avoid ORMs. Mapping to a relational database involves lots of repetitive, boilerplate code. A framework that allows me to avoid 80% of that is worthwhile even if it is only 80%. The problem is in me for pretending it's 100% when it isn't. David Heinemeier Hansson, of Active Record fame, has always argued that if you are writing an application backed by a relational database you should damn well know how a relational database works. Active Record is designed with that in mind: it takes care of the boring stuff, but provides manholes so you can get down to the SQL when you have to. That's a far better way of thinking about the role an ORM should play. There's a consequence to this more limited expectation of what an ORM should do. I often hear people complain that they are forced to compromise their object model to make it more relational in order to please the ORM. Actually I think this is an inevitable consequence of using a relational database: you either have to make your in-memory model more relational, or you complicate your mapping code. I think it's perfectly reasonable to have a more relational domain model in order to simplify your object-relational mapping. That doesn't mean you should always follow the relational model exactly, but it does mean that you take the mapping complexity into account as part of your domain model design. So am I saying that you should always use an existing ORM rather than doing something yourself? Well, I've learned to always avoid saying "always". One exception that comes to mind is when you're only reading from the database. ORMs are complex because they have to handle a bi-directional mapping. A uni-directional problem is much easier to work with, particularly if your needs aren't too complex and you are comfortable with SQL. This is one of the arguments for CQRS. So most of the time the mapping is a complicated problem, and you're better off using an admittedly complicated tool than starting a land war in Asia. But then there is the second alternative I mentioned earlier: can you avoid the problem? Avoiding the Problem To avoid the mapping problem you have two alternatives. Either you use the relational model in memory, or you don't use it in the database. To use a relational model in memory basically means programming in terms of relations, right the way through your application. In many ways this is what the 90s CRUD tools gave you. They work very well for applications where you're just pushing data to the screen and back, or for applications where your logic is well expressed in terms of SQL queries. Some problems are well suited for this approach, so if you can do this, you should. But its flaw is that often you can't.
When it comes to not using relational databases on the disk, a whole bunch of new champions and old memories arise. In the 90s many of us (yes, including me) thought that object databases would solve the problem by eliminating relations on the disk. We all know how that worked out. But there is now the new crew of NoSQL databases: will these allow us to finesse the ORM quagmire and shock-and-awe our data storage? As you might have gathered, I think NoSQL is a technology to be taken very seriously. If you have an application problem that maps well to a NoSQL data model, such as aggregates or graphs, then you can avoid the nastiness of mapping completely. Indeed, this is often a reason I've heard teams give for going with a NoSQL solution. This is, I think, a viable route to go, hence my interest in increasing our understanding of NoSQL systems. But even so, it only works when the fit between the application model and the NoSQL data model is good. Not all problems are technically suitable for a NoSQL database. And of course there are many situations where you're stuck with a relational model anyway. Maybe it's a corporate standard that you can't jump over, or maybe you can't persuade your colleagues to accept the risks of an immature technology. In these cases you can't avoid the mapping problem. So ORMs help us deal with a very real problem for most enterprise applications. It's true they are often misused, and sometimes the underlying problem can be avoided. They aren't pretty tools, but then the problem they tackle isn't exactly cuddly either. I think they deserve a little more respect and a lot more understanding. 1: I have to confess a deep sense of conflict with the Vietnam analogy. At one level it seems like a case of the pathetic overblowing of software development's problems to compare a tricky technology to war. Nasty the programming may be, but you're still in a relatively comfy chair, usually with air conditioning, and bug-hunting doesn't involve bullets coming at you. But on another level, the phrase certainly resonates with the feeling of being sucked into a quagmire. 2: There were also commercial ORMs, such as TopLink and Kodo. But the approachability of open source tools meant they became dominant. 3: I like this phrase so much I feel compelled to subject it to re-use.
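To make the "manhole" idea concrete, here is a minimal JPA/Hibernate-flavoured sketch (not from Fowler's article; the table and column names such as customer and last_order_date are made up): the ORM takes care of the routine mapping, and a native query is the escape hatch for the part of the problem where you really need to know how the relational database works.

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;

// The ORM handles the boilerplate mapping for the common case.
@Entity
class Customer {
    @Id Long id;
    String name;
}

class CustomerQueries {
    // The 80% case: simple lookups need no hand-written SQL at all.
    Customer byId(EntityManager em, long id) {
        return em.find(Customer.class, id);
    }

    // The "manhole": hand-written SQL where the abstraction leaks.
    @SuppressWarnings("unchecked")
    List<Customer> recentlyActive(EntityManager em) {
        return em.createNativeQuery(
                "SELECT * FROM customer WHERE last_order_date > CURRENT_DATE - 30",
                Customer.class)
            .getResultList();
    }
}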
May 9, 2012
by Martin Fowler
· 114,652 Views · 4 Likes
Apache Commons Lang StringUtils
So, I thought it'd be good to talk about another Java library that I like. It's been around for a while and is perhaps not the most exciting library, but it is very, very useful. I probably make use of it daily. org.apache.commons.lang.StringUtils StringUtils is part of Apache Commons Lang (http://commons.apache.org/lang/), and as the name suggests it provides some nice utilities for dealing with Strings, going beyond what is offered in java.lang.String. It consists of over 50 static methods, and I'm not going to cover every single one of them, just a selection of methods that I make the most use of. There are two different versions available, the newer org.apache.commons.lang3.StringUtils and the older org.apache.commons.lang.StringUtils. There are not really any significant differences between the two. lang3.StringUtils requires Java 5.0 and is probably the version you'll want to use. public static boolean equals(CharSequence str1, CharSequence str2) Thought I'd start with one of the most straightforward methods: equals. This does exactly what you'd expect: it takes two Strings and returns true if they are identical, or false if they're not. But java.lang.String already has a perfectly good equals method? Why on earth would I want to use a third-party implementation? It's a fair question. Let's look at some code; can you see any problems? public void doStuffWithString(String stringParam) { if(stringParam.equals("MyStringValue")) { // do stuff } } That's a NullPointerException waiting to happen! There are a couple of ways around this: public void safeDoStuffWithString1(String stringParam) { if(stringParam != null && stringParam.equals("MyStringValue")) { // do stuff } } public void safeDoStuffWithString2(String stringParam) { if("MyStringValue".equals(stringParam)) { // do stuff } } Personally I'm not a fan of either method. I think null checks pollute code, and to me "MyStringValue".equals(stringParam) just doesn't scan well; it looks wrong. This is where StringUtils.equals comes in handy: it's null safe. It doesn't matter what you pass it, it won't NullPointer on you! So you could rewrite the simple method as follows: public void safeDoStuffWithString3(String stringParam) { if(StringUtils.equals(stringParam, "MyStringValue")) { // do stuff } } It's personal preference, but I think this reads better than the first two examples. There's nothing wrong with them, but I do think StringUtils.equals() is worth considering. isEmpty, isNotEmpty, isBlank, isNotBlank OK, these look pretty self-explanatory; I'm guessing they're all null safe? You're probably spotting a pattern here. isEmpty is indeed a null-safe replacement for java.lang.String.isEmpty(), and isNotEmpty is its inverse. So no more null checks: if(myString != null && !myString.isEmpty()) { // urghh // Do stuff with myString } if(StringUtils.isNotEmpty(myString)) { // much nicer // Do stuff with myString } So, why Blank and Empty? There is a difference: isBlank also returns true if the String just contains whitespace, i.e.: String someWhiteSpace = " \t \n"; StringUtils.isEmpty(someWhiteSpace); // false StringUtils.isBlank(someWhiteSpace); // true public static String[] split(String str, String separatorChars) Right, that looks just like String.split(), so this is just a null-safe version of the built-in Java method? Well, yes, it certainly is null safe. Trying to split a null string results in null, and a null separator splits on whitespace.
But there is another reason you should consider using StringUtils.split(...), and that's the fact that java.lang.String.split takes a regular expression as a separator. For example, the following may not do what you want: public void possiblyNotWhatYouWant() { String contrivedExampleString = "one.two.three.four"; String[] result = contrivedExampleString.split("."); System.out.println(result.length); // 0 } But all I have to do is put a couple of backslashes in front of the '.' and it will work fine. It's not really a big deal, is it? Perhaps not, but there's one last advantage to using StringUtils.split, and that's the fact that regular expressions are expensive. In fact, when I tested splitting a String on a comma (a fairly common use case in my experience), StringUtils.split ran over four times faster! public static String join(Iterable iterable, String separator) Ah, finally something genuinely useful! Indeed, I've never found an elegant way of concatenating strings with a separator; there's always that annoying conditional required to check whether you want to insert the separator or not. So it's nice there's a utility to do this for me. Here's a quick example: String[] numbers = {"one", "two", "three"}; StringUtils.join(numbers, ","); // returns "one,two,three" There are also various overloaded versions of join that take arrays and Iterators. OK, I'm convinced. This looks like a pretty useful library; what else can it do? Quite a lot, but like I said earlier I won't bother going through every single method available, I'd just end up repeating what's said in the API documentation. I'd really recommend taking a closer look: http://commons.apache.org/lang/api-3.1/org/apache/commons/lang3/StringUtils.html So basically, if you ever need to do something with a String that isn't covered by Java's core String library (and maybe even stuff that is), take a look at StringUtils.
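Pulling a few of these together, here's a small self-contained sketch of my own (not from the article) that assumes commons-lang3 is on the classpath; it shows the null-safe, literal-separator behaviour of split, plus join doing the reverse:

import org.apache.commons.lang3.StringUtils;

public class StringUtilsDemo {
    public static void main(String[] args) {
        // split treats the separator as literal characters, not a regex
        String[] parts = StringUtils.split("one.two.three.four", ".");
        System.out.println(parts.length);                        // 4

        // null safe: no NullPointerException, just a null result
        System.out.println(StringUtils.split(null, ","));        // null

        // join is the inverse, with no trailing-separator bookkeeping
        System.out.println(StringUtils.join(parts, ","));        // one,two,three,four

        // null-safe equality and blank checks
        System.out.println(StringUtils.equals(null, "MyStringValue")); // false
        System.out.println(StringUtils.isBlank(" \t \n"));             // true
    }
}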
May 5, 2012
by Tom Jefferys
· 33,552 Views
Lean tools: Options thinking
We have now finished exploring the Lean tools for amplifying learning, like feedback, iterations and set-based development. We enter the realm of the 3rd Lean principle: Decide as late as possible. This principle is oriented towards postponing decisions as long as the delay does not impact the product, in order to gain more flexibility instead of becoming locked in by some initial design decisions. Software is easy to rebuild from source code, but its architecture is not as malleable by default as non-technical people would think. Moreover, there are some changes which will always happen, like upgrades of libraries and operating systems, which complement changes in requirements or integration ports. The easiest decision to change is the one that has not been made yet. Options Thinking The first tool that helps in postponing decisions is Options Thinking: the introduction of mechanisms whose specific purpose is to enable delaying decisions. In the financial domain, an option is the right to buy a good at a certain price before a future date arrives, effectively transferring the decision of buying shares or products to some time in the future, as options can expire without being exercised. A simpler instance of Options Thinking cited by Mary Poppendieck is a hotel reservation: you invest a small sum of money (the reservation fee) to book a room; exercising the option means actually going to the hotel, a decision which is made only when the time comes. Trains and airlines often use the same pricing model for seats (even if we do not consider the rise of prices as a flight fills up). There are multiple types of tickets for each combination of flight and date: some basic and not transferable or refundable, some more costly that provide the option of changing the date or getting a partial or total refund. Agile Mary Poppendieck adds the insight that Agile software development is a process that creates many options by introducing a very flexible plan and only prescribing more detailed actions after several inspect-and-adapt loops. It's not bad to delay a commitment until you know more about a problem: forced early decisions are the mark of waterfall (actually of the mainstream version of waterfall). But options do not come for free: for example, in order to simplify a technical decision, XP suggests creating throwaway code. These spikes are explorations of each potential solution, which in a certain sense are a waste of development time, as their final result is of low quality and usually thrown away. However, spikes produce knowledge about the solution that results in a better estimate for its full development, or in its abandonment. The decision to adopt a technology, or of which solution to adopt, is delayed until the end of a spike, but this option pays for itself quickly as uncertainty is removed and decisions "get it right" with a higher probability. Real world examples Almost any application I have been involved with in the last two years has had the separation of a persistence layer as one of its goals: Active Record has been progressively abandoned in the PHP world in favor of Data Mappers like the Doctrine ORM and ODMs. As with all options that can be bought, this separation does not come for free: development is a little slower when Repositories are objects that have to be designed, instead of just a bunch of static calls to the Entity class like User::find() (although there are benefits of the Data Mapper approach that go beyond keeping options open).
An isolated persistence layer, however, allows us to postpone fundamental decisions about which database to use: these are rough times for many of them, as licenses change (MySQL) and new NoSQL solutions come out and evolve. Every month of development where you're not tied to a specific database is a month where the hype goes down and we move towards more mature solutions that we can choose with greater knowledge of the requirements of our data. Do we need relational database consistency? Or a schema-less store? Moreover, the investment in persistence adapters separated from the core of the application lets us choose different databases for different bounded contexts of an application; for example, storing views in a relational database and the primary data as a set of aggregates in Couch or Mongo. Conclusion I will never advocate investing in an option just for the sake of the technical challenge, nor claim that options come for free; but once you recognize that postponing a decision is valuable for the project, there should really be no issue in going out and buying that option.
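For readers who think in code rather than finance metaphors, here is a minimal sketch (mine, in Java rather than the PHP of the examples above, with invented names like UserRepository) of the kind of seam that buys the option: the domain talks only to an interface, so the choice of database can be deferred or changed per bounded context.

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Domain object: knows nothing about how it is stored.
final class User {
    private final String id;
    private final String email;
    User(String id, String email) { this.id = id; this.email = email; }
    String id() { return id; }
    String email() { return email; }
}

// The "option": the rest of the application depends only on this interface,
// so the storage decision (MySQL, Mongo, an in-memory fake for tests...)
// stays open until we know more.
interface UserRepository {
    Optional<User> findById(String id);
    void save(User user);
}

// One cheap adapter to start with; a document-store or SQL implementation
// can replace it later without touching the domain code.
final class InMemoryUserRepository implements UserRepository {
    private final Map<String, User> store = new ConcurrentHashMap<>();
    public Optional<User> findById(String id) { return Optional.ofNullable(store.get(id)); }
    public void save(User user) { store.put(user.id(), user); }
}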
May 2, 2012
by Giorgio Sironi
· 9,847 Views
yield(), sleep(0), wait(0,1) and parkNanos(1)
On the surface these methods do the same thing in Java: Thread.yield(), Thread.sleep(0), Object.wait(0,1) and LockSupport.parkNanos(1). They all wait a short period of time, but how long that is varies a surprising amount, and between platforms. Timing a short delay The following code times how long it takes to repeatedly call those methods. import java.util.concurrent.locks.LockSupport; public class Pausing { public static void main(String... args) throws InterruptedException { int repeat = 10000; for (int i = 0; i < 3; i++) { long time0 = System.nanoTime(); for (int j = 0; j < repeat; j++) Thread.yield(); long time1 = System.nanoTime(); for (int j = 0; j < repeat; j++) Thread.sleep(0); long time2 = System.nanoTime(); synchronized (Thread.class) { for (int j = 0; j < repeat/10; j++) Thread.class.wait(0, 1); } long time3 = System.nanoTime(); for (int j = 0; j < repeat/10; j++) LockSupport.parkNanos(1); long time4 = System.nanoTime(); System.out.printf("The average time to yield %.1f μs, sleep(0) %.1f μs, " + "wait(0,1) %.1f μs and LockSupport.parkNanos(1) %.1f μs%n", (time1 - time0) / repeat / 1e3, (time2 - time1) / repeat / 1e3, (time3 - time2) / (repeat/10) / 1e3, (time4 - time3) / (repeat/10) / 1e3); } } } On Windows 7 The average time to yield 0.3 μs, sleep(0) 0.6 μs, wait(0,1) 999.9 μs and LockSupport.parkNanos(1) 1000.0 μs The average time to yield 0.3 μs, sleep(0) 0.6 μs, wait(0,1) 999.5 μs and LockSupport.parkNanos(1) 1000.1 μs The average time to yield 0.2 μs, sleep(0) 0.5 μs, wait(0,1) 1000.0 μs and LockSupport.parkNanos(1) 1000.1 μs On RHEL 5.x The average time to yield 1.1 μs, sleep(0) 1.1 μs, wait(0,1) 2003.8 μs and LockSupport.parkNanos(1) 3.8 μs The average time to yield 1.1 μs, sleep(0) 1.1 μs, wait(0,1) 2004.8 μs and LockSupport.parkNanos(1) 3.4 μs The average time to yield 1.1 μs, sleep(0) 1.1 μs, wait(0,1) 2005.6 μs and LockSupport.parkNanos(1) 3.1 μs In summary If you want to wait for a short period of time, you can't assume that all these methods do the same thing, nor that they behave the same across platforms.
April 27, 2012
by Peter Lawrey
· 9,384 Views
Managing and Monitoring Drupal Sites on Windows Azure
A few weeks ago, I co-authored an article (with my colleague Rama Ramani) about how the Screen Actors Guild Awards website migrated its Drupal deployment from LAMP to Windows Azure: Azure Real World: Migrating a Drupal Site from LAMP to Windows Azure. Since then, Rama and another colleague, Jason Roth, have been working on writing up how the SAG Awards website was managed and monitored in Windows Azure. The article below is the fruit of their work…a very interesting/educational read. Overview Drupal is an open source content management system that runs on PHP. Windows Azure offers a flexible platform for hosting, managing, and scaling Drupal deployments. This paper focuses on an approach to host Drupal sites on Windows Azure, based on learning from a BPD Customer Programs Design Win engagement with the Screen Actors Guild Awards Drupal website. This paper covers guidelines and best practices for managing an existing Drupal web site in Windows Azure. For more information on how to migrate Drupal applications to Windows Azure, see Azure Real World: Migrating a Drupal Site from LAMP to Windows Azure. The target audience for this paper is Drupal administrators who have some exposure to Windows Azure. More detailed pointers to Windows Azure content is provided throughout the paper as links. Drupal Application Architecture on Windows Azure Before reviewing the management and monitoring guidelines, it is important to understand the architecture of a typical Drupal deployment on Windows Azure. First, the following diagram displays the basic architecture of Drupal running on Windows and IIS7. In the Windows Server scenario, you could have one or more machines hosting the web site in a farm. Those machines would either persist the site content to the file system or point to other network shares. For Windows Azure, the basic architecture is the same, but there are some differences. In Windows Azure the site is hosted on a web role. A web role instance is hosted on a Windows Server 2008 virtual machine within the Windows Azure datacenter. Like the web farm, you can have multiple instances running the site. But there is no persistence guarantee for the data on the file system. Because of this, much of the shared site content should be stored in Windows Azure Blob storage. This allows them to be highly available and durable. Usually, a large portion of the site caters to static content which lends well to caching. And caching can be applied in a set of places – browser level caching, CDN to cache content in the edge closer to the browser clients, caching in Azure to reduce the load on backend, etc. Finally, the database can be located in SQL Azure. The following diagram shows these differences. For monitoring and management, we will look at Drupal on Windows Azure from three perspectives: Availability: Ensure the web site does not go down and that all tiers are setup correctly. Apply best practices to ensure that the site is deployed across data centers and perform backup operations regularly. Scalability: Correctly handle changes in user load. Understand the performance characteristics of the site. Manageability: Correctly handle updates. Make code and site changes with no downtime when possible. Although some management tasks span one or more of these categories, it is still helpful to discuss Drupal management on Windows Azure within these focus areas. Availability One main goal is that the Drupal site remains running and accessible to all end-users. 
This involves monitoring both the site and the SQL Azure database that the site depends on. In this section, we will briefly look at monitoring and backup tasks. Other crossover areas that affect availability will be discussed in the next section on scalability. Monitoring With any application, monitoring plays an important role with managing availability. Monitoring data can reveal whether users are successfully using the site or whether computing resources are meeting the demand. Other data reveals error counts and possibly points to issues in a specific tier of the deployment. There are several monitoring tools that can be used. The Windows Azure Management Portal. Windows Azure diagnostic data. Custom monitoring scripts. System Center Operations Manager. Third party tools such as Azure Diagnostics Manager and Azure Storage Explorer. The Windows Azure Management Portal can be used to ensure that your deployments are successful and running. You can also use the portal to manage features such as Remote Desktop so that you can directly connect to machines that are running the Drupal site. Windows Azure diagnostics allows you to collect performance counters and logs off of the web role instances that are running the Drupal site. Although there are many options for configuring diagnostics in Azure, the best solution with Drupal is to use a diagnostics configuration file. The following configuration file demonstrates some basic performance counters that can monitor resources such as memory, processor utilization, and network bandwidth. For more information about setting up diagnostic configuration files, see How to Use the Windows Azure Diagnostics Configuration File. This information is stored locally on each role instance and then transferred to Windows Azure storage per a defined schedule or on-demand. See Getting Started with Storing and Viewing Diagnostic Data in Windows Azure Storage. Various monitoring tools, such as Azure Diagnostics Manager, help you to more easily analyze diagnostic data. Monitoring the performance of the machines hosting the Drupal site is only part of the story. In order to plan properly for both availability and scalability, you should also monitor site traffic, including user load patterns and trends. Standard and custom diagnostic data could contribute to this, but there are also third-party tools that monitor web traffic. For example, if you know that spikes occur in your application during certain days of the week, you could make changes to the application to handle the additional load and increase the availability of the Drupal solution. Backup Tasks To remain highly available, it is important to backup your data as a defense-in-depth strategy for disaster recovery. This is true even though SQL Azure and Windows Azure Storage both implement redundancy to prevent data loss. One obvious reason is that these services cannot prevent administrator error if data is accidentally deleted or incorrectly changed. SQL Azure does not currently have a formal backup technology, although there are many third-party tools and solutions that provide this capability. Usually the database size for a Drupal site is relatively small. In the case of SAG Awards, it was only ~100-150 MB. So performing an entire backup using any strategy was relatively fast. If your database is much larger, you might have to test various backup strategies to find the one that works best. 
Apart from third-party SQL Azure backup solutions, there are several strategies for obtaining a backup of your data: · Use the Drush tool and the portabledb-export command. · Periodically copy the database using the CREATE DATABASE Transact-SQL command. · Use Data-tier applications (DAC) to assist with backup and restore of the database. SQL Azure backup and data security techniques are described in more detail in the topic, Business Continuity in SQL Azure. Note that bandwidth costs accrue with any backup operation that transfers information outside of the Windows Azure datacenter. To reduce costs, you can copy the database to a database within the same datacenter. Or you can export the data-tier applications to blob storage in the same datacenter. Another potential backup task involves the files in Blob storage. If you keep a master copy of all media files uploaded to Blob storage, then you already have an on-premises backup of those files. However, if multiple administrators are loading files into Blob storage for use on the Drupal site, it is a good idea to enumerate the storage account and to download any new files to a central location. The following PHP script demonstrates how this can be done by backing up all files in Blob storage after a specified modification date. setProxy(true, 'YOUR_PROXY_IF_NEEDED', 80); $blobs = (array)$blobObj->listBlobs(AZURE_STORAGE_CONTAINER, '', '', 35000); backupBlobs($blobs, $blobObj); function backupBlobs($blobs, $blobObj) { foreach ($blobs as $blob) { if (strtotime($blob->lastmodified) >= DEFAULT_BACKUP_FROM_DATE && strtotime($blob->lastmodified) <= DEFAULT_BACKUP_TO_DATE) { $path = pathinfo($blob->name); if ($path['basename'] != '$$$.$$$') { $dir = $path['dirname']; $oldDir = getcwd(); if (handleDirectory($dir)) { chdir($dir); $blobObj->getBlob( AZURE_STORAGE_CONTAINER, $blob->name, $path['basename'] ); chdir($oldDir); } } } } } function handleDirectory($dir) { if (!checkDirExists($dir)) { return mkdir($dir, 0755, true); } return true; } function checkDirExists($dir) { if(file_exists($dir) && is_dir($dir)) { return true; } return false; } ?> This script has a dependency on the Windows Azure SDK for PHP. Also note there are several parameters that you must modify such as the storage account, secret, and backup location. As with SQL Azure, bandwidth and transaction charges apply to a backup script like this. Scalability Drupal sites on Windows Azure can scale as load increased through typical strategies of scale-up, scale-out, and caching. The following sections describe the specifics of how these strategies are implemented in Windows Azure. Typically you make scalability decisions based on monitoring and capacity planning. Monitoring can be done in staging during testing or in production with real-time load. Capacity planning factors in projections for changes in user demand. Scale Up When you configure your web role prior to deployment, you have the option of specifying the Virtual Machine (VM) size, such as Small or ExtraLarge. Each size tier adds additional memory, processing power, and network bandwidth to each instance of your web role. For cost efficiency and smaller units of scale, you can test your application under expected load to find the smallest virtual machine size that meets your requirements. The workload usually in most popular Drupal websites can be separated out into a limited set of Drupal admins making content changes and a large user base who perform mostly read-only workload. 
End users can be allowed to make ‘writes’, such as uploading blogs or posting in forums, but those changes are not ‘content changes’. Drupal admins are setup to operate without caching so that the writes are made directly to SQL Azure or the corresponding backend database. This workload performs well with Large or ExtraLarge VM sizes. Also, note that the VM size is closely tied to all hardware resources, so if there are many content-rich pages that are streaming content, then the VM size requirements are higher. To make changes to the Virtual Machine size setting, you must change the vmsize attribute of the WebRole element in the service definition file, ServiceDefinition.csdef. A virtual machine size change requires existing applications to be redeployed. Scale Out In addition to the size of each web role instance, you can increase or decrease the number of instances that are running the Drupal site. This spreads the web requests across more servers, enabling the site to handle more users. To change the number of running instances of your web role, see How to Scale Applications by Increasing or Decreasing the Number of Role Instances. Note that some configuration changes can cause your existing web role instances to recycle. You can choose to handle this situation by applying the configuration change and continue running. This is done by handling the RoleEnvironment.Changing event. For more information see, How to Use the RoleEnvironment.Changing Event. A common question for any Windows Azure solution is whether there is some type of built-in automatic scaling. Windows Azure does not provide a service that provides auto-scaling. However, it is possible to create a custom solution that scales Azure services using the Service Management API. For an example of this approach, see An Auto-Scaling Module for PHP Applications in Windows Azure. Caching Caching is an important strategy for scaling Drupal applications on Windows Azure. One reason for this is that SQL Azure implements throttling mechanisms to regulate the load on any one database in the cloud. Code that uses SQL Azure should have robust error handling and retry logic to account for this. For more information, see Error Messages (SQL Azure Database). Because of the potential for load-related throttling as well as for general performance improvement, it is strongly recommended to use caching. Although Windows Azure provides a Caching service, this service does not currently have interoperability with PHP. Because of this, the best solution for caching in Drupal is to use a module that uses an open-source caching technology, such as Memcached. Outside of a specific Drupal module, you can also configure Memcached to work in PHP for Windows Azure. For more information, see Running Memcached on Windows Azure for PHP. Here is also an example of how to get Memcached working in Windows Azure using a plugin: Windows Azure Memcached plugin. In a future paper, we hope to cover this architecture in more detail. For now, here are several design and management considerations related to caching. Area Consideration Design and Implementation For a technology like Memcached, will the cache be collocated (spread across all web role instances)? Or will you attempt to setup a dedicated cache ring with worker roles that only run Memcached? Configuration What memory is required and how will items in the cache be invalidated? Performance and Monitoring What mechanisms will be used to detect the performance and overall health of the cache? 
For ease of use and cost savings, collocation of the cache across the web role instances of the Drupal site works best. However, this assumes that there is available reserve memory on each instance to apply toward caching. It is possible to increase the virtual machine size setting to increase the amount of available memory on each machine. It is also possible to add additional web role instances to add to the overall memory of the cache while at the same time improving the ability of the web site to respond to load. It is possible to create a dedicated cache cluster in the cloud, but the steps for this are beyond the scope of this paper. For Windows Azure Blob storage, there is also a caching feature built into the service called the Content Delivery Network (CDN). CDN provides high-bandwidth access to files in Blob storage by caching copies of the files in edge nodes around the world. Even within a single geographic region, you could see performance improvements, as there are many more edge nodes than Windows Azure datacenters. For more information, see Delivering High-Bandwidth Content with the Windows Azure CDN. Manageability It is important to note that each hosted service has a Staging environment and a Production environment. This can be used to manage deployments, because you can load and test an application in staging before performing a VIP swap with production. From a manageability standpoint, Drupal has an advantage on Windows Azure in the way that site content is stored. Because the data necessary to serve pages is stored in the database and blob storage, there is no need to redeploy the application to change the content of the site. Another best practice is to use a separate storage account for diagnostic data from the one that is used for the application itself. This can improve performance and also helps to separate the cost of diagnostic monitoring from the cost of the running application. As mentioned previously, there are several tools that can assist with managing Windows Azure applications. The following table summarizes a few of these choices. Tool Description Windows Azure Management Portal The web interface of the Windows Azure management portal shows deployments, instance counts and properties, and supports many different common management and monitoring tasks. Azure Diagnostics Manager A Red Gate Software product that provides advanced monitoring and management of diagnostic data. This tool can be very useful for easily analyzing the performance of the Drupal site to determine appropriate scaling decisions. Azure Storage Explorer A tool created by Neudesic for viewing Windows Azure storage accounts. This can be useful for viewing both diagnostic data and the files in Blob storage.
April 25, 2012
by Brian Swan
· 8,463 Views
Algorithm of the Week: How to Determine the Day of the Week
Do you know what day of the week was the day you were born? Monday or maybe Saturday? Well, perhaps you know that. Everybody knows the day he’s born on, but do you know what day was the 31st of January in 1883? No? Well, there must be some method to determine any day in any century. We know that 2012 started at Sunday. After we know that, it’s easy to determine what day is the 2nd of January. It should be Monday. But things get a little more complex if we try to guess some date distant from January the 1st. Indeed 1st of Jan was on Sunday, but what day is 9th of May the same year. This is far more difficult to say. Of course we can go with a brute force approach and count from 1/Jan till 9/May, but that is quite slow and error prone. So what would we do if we had to code a program that answers this question? The easiest way is to use a library. Almost every major library has built-in functions that can answer what day is on a given date. Such are date() in PHP or getDate() in JavaScript. But the question remains: How these library functions know the answer and how can we code such library functions if our library doesn’t support such functionality? There must be some algorithm to help us. Overview Because months have different number of days, and most of them aren’t divisible by 7 without a remainder, months begin on different days of the week. Thus, if January begins on Sunday, the month of February the same year will begin on Wednesday. Of course, in common years February has 28 days, which fortunately is divisible by 7 and thus February and March both begin on the same day, which is great, but isn’t true for leap years. What Do We Know About the Calendar First thing to know is that each week has exactly 7 days. We also know that a common year has 365 days, while a leap year has one day more – 366. Most of the months have 30 or 31 days, but February has only 28 days in common years and 29 in leap years. Because 365 mod 7 = 1 in a common year each year begins exactly on the next day of the preceding year. Thus if 2011 started on Saturday, 2012 starts on Sunday. And yet again, that is because 2011 is not a leap year. What else do we know? Because a week has exactly seven days only February (with its 28 days in a common year) is divisible by 7 (28 mod 7 = 0) and has exactly four weeks in it. Thus in a common year February and March start on a same day. Unfortunately that is not true about the other months. All these things we know about the calendar are great, so we can make some conclusions. Although eleven of the months have either 30 or 31 days they don’t start on a same day, but some of the months do appear to start on a same day just because the number of days between them is divisible by 7 without a remainder. Let’s take a look on some examples. For instance September has 30 days, as does November, while October, which is in between them has 31 days. Thus 30+30+31 makes 91. Fortunately 91 mod 7 = 0. So for each year September and December start on the same day (as they are after February they don’t depend on leap years). The same thing occurs to April and July and the good news is that in leap years even January starts on the same day as April and July. Now we know that there are some relations between months. Thus, if we know somehow that the 13th of April is Monday, we’ll be sure that 13th of July is also Monday. Let’s see now a summary of these observations. We can also refer to the following diagram. For leap years there are other corresponding months. 
Let’s take a look at the following image. Another way to get the same information is the following table. We also know that leap years occur once every four years. However, if there is a common year like 2001, which will be the next common year that starts on the same day of the week as 2001? Because of leap years, a year can start on any one of the seven days of the week and be either leap or common, which means just 14 combinations. Following these observations we can refer to the following table. You can clearly see the pattern “6 4 2 0”. Here’s the month table. Columns 2 and 3 differ only for January and February (in leap years the code for those two months is one less). Clearly the day table is as follows: Now let's go back to the algorithm. Using these tables and applying a simple formula, we can calculate what day of the week a given date falls on. Here are the steps of this algorithm. Get the number for the corresponding century from the centuries table; Get the last two digits of the year; Divide the number from step 2 by 4 and keep the quotient without the remainder; Get the month number from the month table; Get the day of the month; Sum the numbers from steps 1 to 5; Divide the sum by 7 and take the remainder; Find the result of step 7 in the days table. Implementation First let's walk through a simple, practical example and then the code. Let's answer the question from the first paragraph of this post: what day was January 31st, 1883? Take a look at the centuries table: for 1800 – 1899 the code is 2. Get the last two digits of the year: 83. Divide 83 by 4 without the remainder: 83/4 = 20. Get the month number from the month table: Jan = 0. Get the day of the month: 31. Sum the numbers from steps 1 to 5: 2 + 83 + 20 + 0 + 31 = 136. Divide it by 7 and take the remainder: 136 mod 7 = 3. Find the result in the days table: 3 = Wednesday. The following code in PHP implements the algorithm above. function get_century_code($century) { // XVIII if (1700 <= $century && $century <= 1799) return 4; // XIX if (1800 <= $century && $century <= 1899) return 2; // XX if (1900 <= $century && $century <= 1999) return 0; // XXI if (2000 <= $century && $century <= 2099) return 6; // XXII if (2100 <= $century && $century <= 2199) return 4; // XXIII if (2200 <= $century && $century <= 2299) return 2; // XXIV if (2300 <= $century && $century <= 2399) return 0; // XXV if (2400 <= $century && $century <= 2499) return 6; // XXVI if (2500 <= $century && $century <= 2599) return 4; // XXVII if (2600 <= $century && $century <= 2699) return 2; } /** * Get the day of a given date * * @param $date */ function get_day_from_date($date) { $months = array( 1 => 0,// January 2 => 3,// February 3 => 3,// March 4 => 6,// April 5 => 1,// May 6 => 4,// June 7 => 6,// July 8 => 2,// August 9 => 5,// September 10 => 0,// October 11 => 3,// November 12 => 5,// December ); $days = array( 0 => 'Sunday', 1 => 'Monday', 2 => 'Tuesday', 3 => 'Wednesday', 4 => 'Thursday', 5 => 'Friday', 6 => 'Saturday', ); // parse the date (format: d-m-yyyy) $dateParts = explode('-', $date); $year = substr($dateParts[2], 2); // 1. Get the number for the corresponding century from the centuries table $a = get_century_code($dateParts[2]); // 2. Get the last two digits of the year $b = $year; // 3. Divide the number from step 2 by 4 and keep the quotient without the remainder $c = floor($year / 4); // 4. Get the month number from the month table $d = $months[(int)$dateParts[1]]; // in leap years January and February use the leap-year column, which is one less $fullYear = (int)$dateParts[2]; $isLeap = ($fullYear % 4 == 0 && $fullYear % 100 != 0) || ($fullYear % 400 == 0); if ($isLeap && (int)$dateParts[1] <= 2) { $d = ($d + 6) % 7; } // 5. Get the day of the month $g = (int)$dateParts[0]; // 6. Sum the numbers from steps 1 to 5 $e = $a + $b + $c + $d + $g; // 7. Divide it by 7 and take the remainder $f = $e % 7; // 8. Find the result of step 7 in the days table return $days[$f]; } // Wednesday echo get_day_from_date('31-1-1883'); Application This algorithm can be applied in many different cases, although most libraries have built-in functions that can do this for you. The only drawback is that there are more efficient algorithms that don't need this additional space (the tables). However, this algorithm isn't difficult to implement and it gives a good overview of some facts about the calendar.
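As a quick cross-check of the worked example (this snippet is mine, not part of the original article), the standard java.time API, which already encodes the Gregorian calendar rules, agrees that January 31st, 1883 was a Wednesday:

import java.time.DayOfWeek;
import java.time.LocalDate;

public class DayOfWeekCheck {
    public static void main(String[] args) {
        // java.time uses the proleptic Gregorian calendar, matching the tables above
        DayOfWeek day = LocalDate.of(1883, 1, 31).getDayOfWeek();
        System.out.println(day); // WEDNESDAY
    }
}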
April 24, 2012
by Stoimen Popov
· 61,340 Views · 1 Like
Amazon EMR Tutorial: Running a Hadoop MapReduce Job Using Custom JAR
See original post at https://muhammadkhojaye.blogspot.com/2012/04/how-to-run-amazon-elastic-mapreduce-job.html Introduction Amazon EMR is a web service which can be used to easily and efficiently process enormous amounts of data. It uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. Amazon EMR removes most of the cumbersome details of Hadoop, taking care of provisioning Hadoop, running the job flow, terminating the job flow, moving the data between Amazon EC2 and Amazon S3, and optimizing Hadoop. In this tutorial, we will develop a WordCount Java example using Hadoop and then execute our program on Amazon Elastic MapReduce. Prerequisites You must have valid AWS account credentials. You should also have a general familiarity with using the Eclipse IDE before you begin. The reader can also use any other IDE of their choice. Step 1 – Develop MapReduce WordCount Java Program In this section, we are first going to develop a WordCount application. A WordCount program determines how many times different words appear in a set of files. In Eclipse (or whatever IDE you are using), create a simple Java project with the name "WordCount". Create a Java class named Map and override the map method as follows: public class Map extends Mapper<LongWritable, Text, Text, IntWritable> { private final static IntWritable one = new IntWritable(1); private Text word = new Text(); @Override public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException { String line = value.toString(); StringTokenizer tokenizer = new StringTokenizer(line); while (tokenizer.hasMoreTokens()) { word.set(tokenizer.nextToken()); context.write(word, one); } } } Create a Java class named Reduce and override the reduce method as shown below: public class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> { @Override protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { int sum = 0; for (IntWritable value : values) { sum += value.get(); } context.write(key, new IntWritable(sum)); } } Create a Java class named WordCount and define the main method as below: public static void main(String[] args) throws Exception { Configuration conf = new Configuration(); Job job = new Job(conf, "wordcount"); job.setJarByClass(WordCount.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(IntWritable.class); job.setMapperClass(Map.class); job.setReducerClass(Reduce.class); job.setInputFormatClass(TextInputFormat.class); job.setOutputFormatClass(TextOutputFormat.class); FileInputFormat.addInputPath(job, new Path(args[0])); FileOutputFormat.setOutputPath(job, new Path(args[1])); job.waitForCompletion(true); } Export the WordCount program to a jar using Eclipse and save it to some location on disk. Make sure that you have provided the main class (WordCount) while exporting the jar file. Our jar is ready! Step 2 – Upload the WordCount JAR and Input Files to Amazon S3 Now we are going to upload the WordCount jar to Amazon S3. First, go to the following URL: https://console.aws.amazon.com/s3/home Next, click “Create Bucket”, give your bucket a name, and click the “Create” button. Select your new S3 bucket in the left-hand pane. Upload the WordCount JAR and a sample input file for counting the words. Step 3 – Running an Elastic MapReduce job Now that the JAR is uploaded into S3, all we need to do is create a new job flow. Let's execute the steps below.
(I encourage readers to check out the following link for details regarding each step: How to Create a Job Flow Using a Custom JAR.) Sign in to the AWS Management Console and open the Amazon Elastic MapReduce console at https://console.aws.amazon.com/elasticmapreduce/ Click Create New Job Flow. In the DEFINE JOB FLOW page, enter the following details: a) Job Flow Name = WordCountJob b) Select Run your own application c) Select Custom JAR in the drop-down list d) Click Continue In the SPECIFY PARAMETERS page, enter values in the boxes using the following as a guide, and then click Continue: JAR Location = bucketName/jarFileLocation JAR Arguments = s3n://bucketName/inputFileLocation s3n://bucketName/outputpath Please note that the output path must be unique each time we execute the job; Hadoop always creates a folder with the name specified here. After starting the job, just wait and monitor it as it runs through the Hadoop flow. You can also look for errors by using the Debug button. The job should complete within 10 to 15 minutes (this can also depend on the size of the input). After the job completes, you can view the results in the S3 Browser panel. You can also download the files from S3 and analyze the outcome of the job. Amazon Elastic MapReduce Resources Amazon Elastic MapReduce Documentation, http://aws.amazon.com/documentation/elasticmapreduce/ Amazon Elastic MapReduce Getting Started Guide, http://docs.amazonwebservices.com/ElasticMapReduce/latest/GettingStartedGuide/ Amazon Elastic MapReduce Developer Guide, http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/ Apache Hadoop, http://hadoop.apache.org/ See more at https://muhammadkhojaye.blogspot.com/2012/04/how-to-run-amazon-elastic-mapreduce-job.html
April 23, 2012
by Muhammad Ali Khojaye
· 58,639 Views
Face Detection Using HTML5, JavaScript, WebRTC, WebSockets, Jetty and OpenCV
How to create a real-time face detection system using HTML5, JavaScript, and OpenCV, leveraging WebRTC for webcam access and WebSockets for client-server communication.
April 23, 2012
by Jos Dirksen
· 52,130 Views