The Latest Coding Topics

Groovy, Sometimes You Still Need a Semicolon.
Like JavaScript, semicolons are optional in Groovy except for when they aren't optional. These examples are both pretty contrived, though I found both because they're actually something that I've written, and could both be written better. That's not really the point I'm making, though. When something doesn't compile even though it looks like it clearly should, it can be hard to track down why, and it's surprising to learn that it's because you need a semicolon.

Example the first: generics at the end of a line:

    def list = [1,2,3] as List<Integer>
    println list

If you try to compile this in Groovy it will give you the error message 'unexpected token: println'. However, this:

    def list = [1,2,3] as List<Integer>;
    println list

gives the expected output.

Example the second: ambiguous closures:

    {-> assert GroovyClosureTest == owner.getClass() }()
    {-> assert GroovyClosureTest == delegate.getClass() }()

I don't think you'd really ever need to do something like this, but a closure can be defined and called on a single line. Because of Groovy's special closure parameter syntax (e.g., list.each() {} being synonymous with list.each({})), the compiler thinks I'm passing the second closure into the first as an argument. Again, a semicolon is needed to separate the two lines:

    {-> assert GroovyClosureTest == owner.getClass() }();
    {-> assert GroovyClosureTest == delegate.getClass() }()
November 6, 2009
by Scott Leberknight
· 31,300 Views · 1 Like
Fluent Navigation in JSF 2
In this article, the third in a series covering JavaServer Faces (JSF) 2.0 features contributed by Red Hat, or which Red Hat participated in extensively, you'll discover that getting around in a JSF 2 application is much simpler and requires less typing. With improved support for GET requests and bookmarkability, which the previous article covered, JSF 2 is decidedly more nimble. But not at the cost of good design. JSF no longer has to encroach on your business objects by requiring action methods to return navigation outcomes, but can instead reflect on the state of the system when selecting a navigation case. This article should give you an appreciation for how intelligent the navigation system has become in JSF 2.

Read the other parts in this article series: Part 1 - JSF 2: Seam's Other Avenue to Standardization Part 2 - JSF 2 GETs Bookmarkable URLs Part 3 - Part 4 - Part 5 -

Three new navigation variants are going to be thrown at you in this article: implicit, conditional and preemptive. These new options are a sign that the JSF navigation system is becoming more adaptable to the real world. There's also a touch of developer convenience thrown in. Implicit navigation is particularly useful for developing application prototypes, where navigation rules just get in the way. This style of navigation interprets navigation outcomes as view IDs. As you move beyond prototyping, conditional navigation removes the coupling between the web and transactional tier because the navigation handler pulls information from your business components to select a navigation case. Preemptive navigation, which you were introduced to in the last article, can use either implicit navigation or declarative navigation rules to produce bookmarkable URLs at render time. Leveraging the navigation system to generate bookmarkable URLs allows JSF to add GET support while maintaining consistent, centralized navigation rules. Even with these new options, there's no telling what requirements your application might have for navigation. Thus, in JSF 2, you can finally query and modify the navigation cases; a new API has been introduced in JSF 2 that exposes the navigation rule set. Before we get into customizations, let's find out how these new variants make the navigation system more flexible and help prepare the user's next move. Hopefully you won't need those customizations after all.

Flexible navigation choices

The declarative navigation model in JSF was a move away from the explicit navigation "forward" selection by the action in Struts. Navigation transitions in JSF, which get matched based on current view ID, logical outcome and/or action expression signature, are described in the JSF descriptor (faces-config.xml) using XML-based rules. The matched transition indicates the next view to render and whether a client-side redirect should precede rendering. Here's a typical example:

    <navigation-rule>
        <from-view-id>/guess.xhtml</from-view-id>
        <navigation-case>
            <from-action>#{numberGuessGame.guess}</from-action>
            <from-outcome>correct</from-outcome>
            <to-view-id>/gameover.xhtml</to-view-id>
        </navigation-case>
    </navigation-rule>

While the JSF navigation model is clearer and arguably more flexible than in Struts, two fundamental problems remain. First, the action method is still required to return a navigation directive. The directive just happens to be a more "neutral" string outcome rather than an explicit type (i.e., ActionForward), but the coupling is just as tight and you lose type safety in the process, so is it really an improvement? The other issue is that you must define a navigation case to match that outcome, even in the simplest cases, which can be really tedious.
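To illustrate the coupling just described, here is a minimal sketch (not taken from the article) of what the #{numberGuessGame.guess} action method behind the rule above might look like; the field names are hypothetical:

    public class NumberGuessGame {

        private int currentGuess;
        private int target;

        // Invoked as the action; the returned string is the logical outcome that
        // the navigation handler matches against <from-outcome> in faces-config.xml.
        public String guess() {
            if (currentGuess == target) {
                return "correct";   // matches the navigation case shown above
            }
            return null;            // null outcome: re-render the current view
        }

        // getters and setters omitted
    }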
So you can't make the argument that the navigation model is less obtrusive or more convenient. It's just stuck somewhere in between. To sum it up, the JSF navigation model is not flexible enough. It needs to accommodate different development styles better and it needs to be more self sufficient. On the one hand, your style or development phase may dictate waiving the declarative navigation rule abstraction. On the other hand, you may want to completely decouple your business objects from the navigation model, eradicating those arbitrary return value directives. JSF 2 gives you this broad range of options, and even let's you settle for a happy medium. The first option is provided by implicit navigation and the second conditional navigation. With implicit navigation, you can even use the current model without having to define the navigation rule right away. Let's unbox these two new alternatives, starting with implicit navigation. Implicit navigation JSF will post a form back to the current view (using the POST HTTP method) whenever the user performs an action, such a clicking a command button (hence the term "postback"). In the past, the only way to get JSF to advance to another view after the action is invoked (i.e., following the Invoke Application phase) was to define a navigation case in faces-config.xml. Navigation cases are matched based on the EL signature of the action method invoked and the method's return value converted to a string (the logical outcome). To cite an example, assume the user clicks on a button defined as follows: The preview() method on the bean named commandHandler returns a value to indicate the outcome of processing: public String preview() { // tidy, translate and/or validate comment return "success"; } These two criteria are joined in a navigation case that dictates which view is to be rendered next. /entry.xhtml #{commentHandler.preview} success /previewComment.xhtml If no navigation case can be matched, all JSF knows to do is render the current view again. So without a navigation case, there is no navigation. A quick shorthand, which is present in Seam, is to have the action method simply return the target view ID directly. In this case, you're effectively treating the logical outcome value as a view ID. This technique has been adopted in JSF 2 as implicit navigation. It's improved since Seam because you can choose to drop the view extension (e.g., .xhtml) and JSF will automatically add it back on for you when looking for a view ID. Therefore, it's no more invasive than the string outcome values you are currently returning. Implicit navigation comes into play when a navigation case cannot be matched using the existing mechanism. Here's how the logic outcome is processed in the implicit navigation case: Detect the presence of the ? character in the logical outcome If present, capture the query string parameters that follow it the ? character The special query string parameter faces-redirect=true indicates that this navigation should be issued using a client-side redirect If the logical outcome does not end with a file extension, append file extension of current view ID (e.g., .xhtml) If the logical outcome does not begin with a /, prepend the location of current view id (e.g., /, /admin/, etc.) 
Attempt to locate the template for the view ID If the template is found, create a virtual navigation case that targets the resolved view ID If the template is not found, skip implicit navigation Carry out the navigation case If the navigation case is not a redirect, build and render the target view in the same request If the navigation case is a redirect, build a redirect URL, appending the query string parameters captured earlier, then redirect to it Implicit navigation can be leveraged anywhere a logical outcome is interpreted. That includes: The return value of an action method The action attribute of a UICommand component (e.g., ) The outcome attribute of a UIOutcomeTarget (e.g., ) The handleNavigation() method of the NavigationHandler API Here's an example of the navigation to the preview comment view translated into implicit navigation. The return value is automatically decorated with a leading / and a trailing .xhtml. public String preview() { // tidy, translate and/or validate comment return "previewComment"; } The /previewComment.xhtml view will be rendered in the same request. If you want to redirect first, add the following flag in the query string of the return value: public String preview() { // tidy, translate and/or validate comment return "previewComment?faces-redirect=true"; } You can accomplish any navigation scenario using implicit navigation that you can today with a formal navigation case defined in faces-config.xml. Implicit navigation is designed as the fall-through case (after the explicit navigation rules are consulted). If it fails (i.e., the template cannot be located), and the JSF 2 ProjectStage is set to development, a FacesMessage is automatically generated to warn the developer of a possible programming error. Implicit navigation is great for prototyping and other rapid development scenarios. The major downside of implicit navigation is that you are further tying your business objects into the navigation model. Next we'll look conditional navigation, which provides an alternative that keeps your tiers loosely coupled. Conditional navigation Implicit navigation spotlights how invasive it is to put the onus on your business object to return a logic outcome just to make JSF navigation happy (and work). This coupling is especially problematic when you want to respond to user interface events using components in your business tier, a simplified architecture that is supported by both Seam and Java EE 6 to reduce the amount of glue code without increasing coupling. What would be more "logical" is to invert the control and have the navigation handler consult the state of the bean to determine which navigation case is appropriate. The navigation becomes contextual rather than static. That's what conditional navigation gives you. Conditional navigation introduces a condition as a new match criteria on the navigation case. It's defined in the element as a child of and expressed using an EL value expression. The value expression is evaluated each time the navigation case is considered. For any navigation case that matches, if a condition is defined, the condition must resolve to true for the navigation case to be considered a match. Here's an example of a conditional navigation case: #{registration.register} #{currentUser.registered} /account.xhtml As you can see, the condition doesn't necessarily have to reference a property on the bean that was invoked. It can be any state reachable by EL. 
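The XML for that conditional case did not survive in this excerpt; a reconstruction from the fragments above, assuming the standard faces-config.xml structure, would look roughly like this (the <if> element carries the EL condition):

    <navigation-rule>
        <navigation-case>
            <from-action>#{registration.register}</from-action>
            <if>#{currentUser.registered}</if>
            <to-view-id>/account.xhtml</to-view-id>
        </navigation-case>
    </navigation-rule>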
Conditional navigation solves a secondary problem with the JSF navigation model, one of those little annoyances in JSF that was tedious to workaround. In JSF 1.2 and earlier, if your action method is a void method or returns a null value, interpreted in both cases as a null outcome, the navigation is skipped entirely. As a result, the current view is rendered again. The only workaround is to override the navigation handler implementation and change the behavior. That really throws a wrench in being able to cut the glue code between your UI and transactional tier. That changes with the introduction of conditional navigation. Since the condition provides either an alternative, or supplemental, match criteria to the logical outcome, navigation cases that have a condition are consulted even when the logical outcome is null or void. When the outcome is null, you can emulate switch statement to match a navigation case, switching on the condition criteria: #{identity.login} #{currentUser.admin} /admin/home.xhtml #{identity.login} #{currentUser.vendor} /vendor/home.xhtml #{identity.login} #{currentUser.client} /client/home.xhtml If you intend to simply match the null outcome in any case, you can use a condition that is verily true (which, admittedly, could be improved in JSF 2.1): #{identity.logout} #{true} /home.xhtml You can also use this fixed condition to provide a fall-through case. But wait, there's more! Having to itemize all the possible routes using individual navigation cases causes death by XML (a quite painful death). What if you wanted to delegate the decision to a navigation helper bean or involve a scripting language? There's good news. You can! The target view ID can be resolved from an EL value expression. Let's return to the login example and use a helper bean to route the user using one navigation case: #{identity.login} #{navigationHelper.userHomeViewId} Oh my goodness, how much nicer is that? The navigation helper can encapsulate the logic of inspecting the currentUser bean and determining the correct target view ID. In this section, we looked at two additional ways a navigation case is matched, increasing the overall flexibility of the navigation model. Implicit navigation maps logical outcomes directly to view IDs and conditional navigation reflects on contextual data to select a navigation case without imposing unnecessary coupling with the transactional tier. We're still looking at the same fundamental navigation model, though. In the next section, you'll see the navigation model used in a new role, and in a new place in the JSF life cycle, to generate bookmarkable links. Anticipating the user's next move Up to this point, the navigation handler only comes into play on a postback. Since user interface events trigger a "postback" to the current view, as mentioned earlier, the navigation handler kicks in after the Invoke Application phase to route the user to the next view. JSF 2 introduces a completely new use of the navigation handler by evaluating the navigation rules during the Render Response phase. This render-time evaluation is known as preemptive (or predetermined) navigation. Preemptive navigation The spec defines preemptive navigation as a mechanism for determining the target URL at Render Response, typically for a hyperlink component. The current view ID and specified outcome are used to determine the target view ID, which is then translated into a bookmarkable URL and used as the hyperlink's target. 
This process happens, of course, before the user has activated the component (i.e., click on the hyperlink). In fact, the user may never activate the component. The idea is to marry the declarative (or implicit) navigation model with the support for generating bookmarkable links. Based on what was just described, you should now understand why you declare the target view ID in an attribute named outcome on the new bookmarkable component tags (and why those components inherit from a component class named UIOutcomeTarget). You are not targeting a view ID directly, but rather a navigation outcome which may be interpreted as a view ID if the matching falls through to implicit navigation. Let's consider an example. Assume that you want to create a link to the home page of the application. You could define the link using one the new bookmarkable link component: This definition would match the following navigation case if it existed: * home /home.xhtml Of course, with implicit navigation available, this navigation case would be redundant. We could exclude it and the result would be the same. Home But if the target view ID depends on the context, such as the user's credentials, you might choose to reintroduce the navigation case to leverage conditional logic as we did earlier. In either case, the key is that the target view ID is not hard-coded in the template. As it turns out, you've already been using preemptive navigation when you explored bookmarkability in the last article. But there's a critical part of preemptive navigation that we haven't yet fully explored: the assembly of the query string. As it turns out, this topic also applies to redirect navigation rules. In a sense, preemptive navigation has the same semantics as redirect navigation rules because both produces URL that lead to a non-faces request. The only difference is that a bookmarkable URL is a deferred request, whereas a redirect happens immediately. In both cases, the payload in the query string is an essential part of the URLs identity. Building the query string As a result of the new GET support in JSF 2, there are now a plethora of ways to tack on values to the query string. Options can collide when heading into the navigation funnel. What comes out on the other side? There's a simple conflict resolution algorithm to find out. Each parameter source is given a precedence. When a conflict occurs, meaning two sources define the same parameter name, the parameter from the source with the highest precedence is used. The query string parameters are sourced using the following order of precedence, from highest to lowest: Implicit query string parameter (e.g., /blog.xhtml?id=3) View parameter (defined in the of the target view ID) Nested in UIOutcomeTarget (e.g., ) or UICommand component (e.g., ) Nested within the navigation case element in faces-config.xml Granted, this appears to be a lot of options. Don't worry, we'll walk you through the cases in which you would use each option in this article. We recommend you choose a single style of providing navigation parameters that best suits your architecture and keep the others in the back of your mind, so that when an edge case comes up, you can tap into their power. In the last article, you learned that you can use view parameters to let JSF manage the query string for you. Instead of using view parameters, you could just tack on the query string yourself when building a link to a blog entry. 
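The markup for that hand-built link was stripped from this excerpt; a sketch of it, using the bookmarkable link component and the bean names from the article's blog example, might look something like this (a nested parameter appended to the query string):

    <h:link value="permalink" outcome="entry">
        <f:param name="id" value="#{blog.entryId}"/>
    </h:link>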
You could even abstract the parameter away from the view and define it in the navigation case instead, but again it presents a challenge to tooling:

    <navigation-case>
        <from-outcome>permalink</from-outcome>
        <to-view-id>/entry.xhtml?id=#{blog.entryId}</to-view-id>
    </navigation-case>

A nested parameter element would also work here, especially if you want to centralize your parameters. In terms of navigation, the most important point to emphasize here is that you can finally add query string parameters to a redirect URL in the navigation rules. This need likely appears in your existing applications. No longer do you have to resort to using the programmatic API to issue a redirect with a query string payload. Let's consider the case of posting a comment to an entry. This example demonstrates the case when you are submitting a form and want to redirect to a bookmarkable page which displays the result of submitting the form:

    <navigation-case>
        <from-action>#{commentHandler.post}</from-action>
        <to-view-id>/entry.xhtml</to-view-id>
        <redirect>
            <view-param>
                <name>id</name>
                <value>#{blog.entryId}</value>
            </view-param>
        </redirect>
    </navigation-case>

Note: Don't confuse <view-param> with a UIViewParameter. Think of it more as a redirect parameter (the tag should probably be called <redirect-param>, not <view-param>; something to address in JSF 2.1). There are now plenty of options to pass the user along with the right information. But the spec can't cover everything. That's why you can now query the navigation rule base at runtime to do with it what you like.

Peeking into the navigation cases

You've now seen a number of ways in which the navigation cases themselves have become more dynamic. Regardless of how dynamic they are, the fact remains that once you ship the application off for deployment, the navigation cases that you defined in faces-config.xml are set in stone. That's no longer the case in JSF 2. A new navigation handler interface, named ConfigurableNavigationHandler, has been introduced that allows you to query and make live modifications to the registered NavigationCase objects. Not that you necessarily want to make changes in production. Having a configurable navigation rule set means that you can incorporate a custom configuration scheme such as a DSL or even a fluent, type-safe navigation model from which rules can be discovered at deployment time. In short, the navigation rule set is pluggable, and it's up to you what to plug into it. NavigationCase is the model that represents a navigation case in the JSF API. When JSF starts up, the navigation cases are read from the JSF descriptor, encapsulated into NavigationCase objects and registered with the ConfigurableNavigationHandler. You can retrieve one of the registered NavigationCase objects by the action expression signature and logical outcome under which it is registered:

    NavigationCase navCase = navigationHandler.getNavigationCase(
        facesContext, "#{commandBoard.post}", "success");

You can also access the complete navigation rule set as a Map<String, Set<NavigationCase>>, where the keys are the <from-view-id> values.

    Map<String, Set<NavigationCase>> cases = navigationHandler.getNavigationCases();

You can use this map to register your own navigation cases dynamically. For example, a framework might read an alternative navigation descriptor (such as Seam's pages descriptor) and contribute additional navigation cases. With an individual NavigationCase object in hand, you can either read its properties or use it to create an action or redirect URL, perhaps to feed into your own navigation handler. There are a lot of possibilities here. The slightly awkward part is how you reference this new API (ConfigurableNavigationHandler). The default NavigationHandler implementation in a standard JSF implementation must implement this interface.
But you still have to cast to it when you retrieve it from the Application object, as follows:

    ConfigurableNavigationHandler nh = (ConfigurableNavigationHandler)
        FacesContext.getCurrentInstance().getApplication().getNavigationHandler();

Obviously, something to revisit in JSF 2.1. Once you get a handle on it, the navigation model is your oyster. You can define new ways to navigate or use it to generate bookmarkable URLs in your own style.

Forging ahead

The JSF navigation model had the right idea in spirit, but it lacked a couple of elements that would allow it to truly realize loose coupling, its required use slowed down prototyping, and you had no control to query or modify the navigation rule set at runtime. You're going to find that in JSF 2, the navigation system is much more flexible. You could argue that it finally accomplishes its original goals. For prototype applications, you can get navigation working without touching the faces-config.xml descriptor with implicit navigation. Just use a view ID, with or without an extension, as the logical outcome and away you go. As the application matures, you can establish a clean separation between JSF and your transactional tier by using conditional navigation to select a navigation case. You can trim the number of navigation cases by defining the target view ID as a value expression and having JSF resolve the target view ID from a navigation helper bean. If the design of your application calls for bookmarkable support, you can leverage the navigation handler in its new role to produce bookmarkable URLs at render time. In JSF 2, it's a lot easier to route the user around the application. While that may be good for some applications, other applications never advance the user beyond a single page. These single-page applications transform in place using Ajax and partial page updates. The next article in this series will open your eyes to how well Ajax and JSF fit together, and what new Ajax innovations made their way into the spec.
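As a concrete illustration of the dynamic registration discussed above, here is a minimal sketch (not from the original article) that contributes a navigation case at runtime; the view IDs, outcome, and helper class are hypothetical, and the constructor arguments assume the JSF 2.0 NavigationCase API:

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import javax.faces.application.ConfigurableNavigationHandler;
    import javax.faces.application.NavigationCase;
    import javax.faces.context.FacesContext;

    public class NavigationContributor {

        // Adds a rule equivalent to: from /entry.xhtml, outcome "list",
        // redirect to /entries.xhtml and include its view parameters.
        public void contribute() {
            FacesContext ctx = FacesContext.getCurrentInstance();
            ConfigurableNavigationHandler handler = (ConfigurableNavigationHandler)
                ctx.getApplication().getNavigationHandler();

            NavigationCase listCase = new NavigationCase(
                "/entry.xhtml",   // from-view-id
                null,             // from-action (none)
                "list",           // from-outcome
                null,             // condition (none)
                "/entries.xhtml", // to-view-id
                null,             // query string parameters
                true,             // redirect
                true);            // include view params in the redirect URL

            Map<String, Set<NavigationCase>> cases = handler.getNavigationCases();
            Set<NavigationCase> fromEntry = cases.get("/entry.xhtml");
            if (fromEntry == null) {
                fromEntry = new HashSet<NavigationCase>();
                cases.put("/entry.xhtml", fromEntry);
            }
            fromEntry.add(listCase);
        }
    }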
November 2, 2009
by Dan Allen
· 138,205 Views · 2 Likes
JSF 2 GETs Bookmarkable URLs
JSR-314: JavaServer Faces (JSF) 2.0 demonstrates a strong evolution of the JSF framework driven by de facto standards that emerged out of the JSF community and participating vendor's products. This article, the second installment in covering JSF 2.0 features contributed by Red Hat, or which Red Hat participated in extensively, covers the new features that bring formalized GET support to a framework traditionally rooted in POST requests. The primary building blocks of this support are view parameters and a pair of UI components that produce bookmarkable hyperlinks. Both features incubated in Seam and, therefore, should be familiar to any Seam developer. They are also features for which the JSF community has passionately pleaded. Author's Note: Many thanks to Pete Muir, who played a pivotal role as technical editor of this series. Read the other parts in this article series: Part 1 - JSF 2: Seam's Other Avenue to Standardization Part 2 - JSF 2 GETs Bookmarkable URLs Part 3 - Part 4 - Part 5 - Every user session must start somewhere. JSF was designed with the expectation that the user always begins on a launch view. This view captures initial state and allows the user to indicate which action to invoke by triggering a UI event, such as clicking a button. For instance, to view a mortgage loan, the user might enter its id into a text box and then click the "Lookup" button. The assumption that this scenario is the norm is surprising since it overlooks that fact that the web was founded on the concept of a hyperlink. A hyperlink points to a resource (URI), which may already contain the original state and intent, such as to view a mortgage loan summary. There's no need to bother the user with a launch view in this case. While hyperlinks are most often used in web sites, they apply to web applications as well (see this blog entry for a discussion about the difference between a web site and a web application). Hyperlinks support reuse by serving as an exchange language in composite applications. One application can link to a resource in another application in lieu of having to duplicate its functionality. In fact, the request may be coming from a legacy application that isn't even web-based. In that case, you'll likely be plopping the user into the web application somewhere in the middle. As it turns out, this situation is quite common. When you visit a blog, do you start on the search screen to find an entry to read? Not likely. More times than not, you click on a link to view a specific blog entry. The point to take away from this discussion is that initial requests (referred to as non-faces requests in JSF) can be just as important as form submissions (faces requests), whether in a web site or web application. In the past, JSF has struggled to support the scenario cited above, placing much more emphasize on faces requests. JSF 2 rectifies this imbalance by introducing view parameters and hyperlink-producing UI components. View parameters allow the application to respond to a resource request by baking formal processing of request parameters into the JSF life cycle for both GET and POST requests. View parameters are not limited to consuming data. They are bi-directional. JSF 2 can propagate the data captured by view parameters when generating bookmarkable URLs, with complementary behavior for redirect URLs produced by redirect navigation cases. We'll start by examining view parameters, how they are defined and how they are worked into the JSF life cycle. 
You'll then discover how they work in tandem with the new hyperlink-producing components and the navigation handler to bring "bookmarkable" support to JSF. Introducing view parameters The API documentation describes a view parameter, represented by the javax.faces.component.UIViewParameter component class, as a declarative binding between a request parameter and a model property. The binding to the model property is expressed using an EL value expression (e.g., #{blog.entryId}). If the expression is omitted, the request parameter is bound instead to a request-scoped variable with the same name. Here's a simple example of a view parameter that maps the value of a request parameter named id to the JavaBean-style property named entryId on a managed bean named blog. Assuming the entryId property on the blog managed bean is of type long, a value of 9 will be assigned to the property when the following URL is requested: http://domain/blog/entry.jsf?id=9 But wait, there's more! The value of the request parameter is first converted and validated before being assigned to the model property. This behavior should sound familiar. That's because it mirrors the processing of form input bindings on a faces request. In a way, view parameters turn the query string into an alternative form submission. And like form inputs, view parameters are also processed during faces requests. The complete view parameter life cycle is covered later when we look at view parameter propagation Before going any further, it's important to point out that view parameters are only available when using the new View Declaration Language (VDL), a standardized version of Facelets. The primary reason is because the JSR-314 EG agreed that no new features should be made to support JSP since it's deprecated as a view handler in JSF 2. Perhaps you are thinking... Isn't this already possible? If you are savvy JSF developer, you're perhaps aware that it's already possible to map a request parameter value to a model property. The assignment is declared by referencing an element of the #{param} map in a element of a managed bean declaration. For instance, you could alternatively map the id request parameter to the blog managed bean in faces-config.xml as follows: blog com.acme.Blog entryId #{param['id']} The similiarities end there. View parameters go above and beyond this simple assignment by providing: View-oriented granularity (the property mapping in the managed bean definition is global to the application) Custom converters and/or validators (along with failure messages) Bi-directionality It's hard to say which feature is the most important, but bi-directionality is certainly the most unique. Since view parameters are a mapping to a JavaBean-style property, the value can be read from the property and propagated to the next request using either the query string or the UI component tree state (depending on the type of request). You are going to find out how useful this bi-directionality can be later on in the article. Suffice to say, while the property mapping in the managed bean definition works, it's pretty anemic. View parameters are far more adequate and robust in contrast. Pertaining to the topic in this article, view parameters are the key to bringing bookmarkable support to JSF. And since bookmarks link to specific views, so must view parameters. It's all in the view As you may have guessed, view parameters are view-oriented. 
That means they somehow need to be associated with one or more views (as opposed to being linked to a managed bean, for instance). Up to this point, however, there was no facility for associating extensible, non-rendering metadata with a JSF view. So the EG first had to find a place within the UI component tree to stick metadata like view parameters. That led to the introduction of the metadata facet of UIViewRoot. The next section will introduce this new facet and how it's used to host view parameters for a particular view, or even a set of views. Then we get into how view parameters get processed in the JSF life cycle. The view metadata facet View parameters provide information about how request parameters should be handled when a view is either requested or linked to. The view parameters are not rendered themselves. Therefore, we say that they are part of the view's metamodel and described using metadata. So the question is, "Where should this metadata live?" It turns out that a JSF view, which is represented at the root by the javax.faces.component.UIViewRoot component class, already accommodates some metadata. Currently, this metadata consists of string values to define settings such as the locale, render kit, and content type of the view, and method expressions that designate view-specific phase observers. For example: ... While values can be assigned to these metadata properties explicitly in Java code, more often they are assigned declaratively using corresponding attributes of the component tag. But neither UIViewRoot or it's component tag can accommodate complex metadata--that is, metadata which cannot be described by a single attribute. That's were the view metadata facet comes in. The view metadata facet is a reserved facet of UIViewRoot, named javax_faces_metadata, that can hold an arbitrarily complex branch of UI components that provide additional metadata for a view. Facets are special because they are ignored by a UI component tree traversal, requiring an imperative request to step into one of them. This aspect makes a facet an ideal candidate for tucking away some metadata for the view that can be accessed on demand. The view metadata facet looks like any other facet in the UI component tree. It must be declared as a direct descendant of within a view template as follows: ... ... Note: If you are using Facelets, you may not be familiar with the tag since it's optional in Facelets. When you add it to your template, it must be the outer-most component tag, but it does not have to be the root of the document. Since the view metadata facet is a built-in facet, and is expected to be heavily used, the alias tag was introduced as a shorthand for the formal facet definition shown above: ... ... We now have a place in the UI component tree to store metadata pertaining to the view. But why define the metadata in the view template? It's all about reuse and consistency. Describing view metadata with UI components There are two important benefits to defining the metadata within the view template. First, it circumvents introducing yet another XML file with its own schema that developers would have to learn. More importantly, it allows us to reuse the UI component infrastructure to define behavior, such as registering a custom converter or validator, or to extract common view parameters into an include template. Since we're using UI components to describe the view metadata, then it makes sense to treat the UIViewParameter like any other input component. In fact, it extends UIInput. 
That allows us to register custom converters and validators on a UIViewParameter without any special reservations. Here's an example: Note: Later in this series you'll learn that like input components, view parameters can enforce constraints defined by Bean Validation annotations (or XML), making the explicit validation tags such as this unnecessary. But there is one caveat to embedding the view metadata in the template. Without special provisions, extracting the metadata would require building the entire view (i.e., UI component tree). Not only would this be expensive and unnecessary if the intent is not to render the view, it could also have side effects. When the component tree is built, value expressions in Facelets tag handlers get evaluated, potentially altering the state of the system. To prevent these counteractions, the view metadata facet is given special treatment in the specification. Specifically, it must be possible to be extract and built it separately from the rest of the component tree. Earlier, I mentioned that view parameters are only available in Facelets, and not JSP, because of an executive decision. There's also a technical reason why view parameters rely on Facelets. Only Facelets can provide the necessary separation between template parsing and component tree construction that allows a template fragment to be processed in isolation. The result of this operation is a genuine UI component tree, represented by UIViewRoot, that contains only the view metadata facet and its children. For all intents and purposes, it's as though the view template only contained this one child element. Using the following logic, it's possible to retrieve the metadata for an arbitrary view at any point in time. This data mining will come in to play later when we talk about view parameter propagation. String viewId = "/your_view_id.xhtml" FacesContext ctx = FacesContext.getCurrentInstance(); ViewDeclarationLanguage vdl = ctx.getApplication().getViewHandler() .getViewDeclarationLanguage(ctx, viewId); ViewMetadata viewMetadata = vdl.getViewMetadata(ctx, viewId); UIViewRoot viewRoot = viewMetadata.createMetadataView(ctx); UIComponent metadataFacet = viewRoot.getFacet(UIViewRoot.METADATA_FACET_NAME); At this point you could retrieve the UIViewParameter components, which are children of the facet, to perhaps access the view parameter mappings. More likely, though, you'll be looking for your own custom components so you can execute custom behavior before the view is rendered (e.g., view actions). The extraction of the view metadata is very clever because, while it only builds a partial view, it still honors Facelets compositions. That means you can put your metadata into a common template and include it. Using some creative arrangement, you can apply common metadata to a pattern of views. Here's an example: ... ... You've learned that defining a view metadata facet provides the following services for JSF: Arbitrarily complex metadata, which can reuse existing component infrastructure Metadata is kept with the view, or in a shared template, instead of in an external XML file Can be extracted and processed without any side effects (idempotent) Common metadata declarations can be shared across multiple views Now that you are well versed in the view metadata facet, it's time to work out a concrete example of view parameters in practice. We'll look at how to enforce preconditions and load data on an initial request using information from the query string. 
Then you'll learn how that information gets propagated as the user navigates to other views. Weaving parameters into the life cycle This article has alluded several times to the use case of loading a blog entry from a URL by passing the value of the id parameter to our managed bean on an initial request. Let's allow this scenario to play out. Here's the URL the user might request coming into the site: http://domain/blog/entry.jsf?id=9 We'll start by asking what we do with the value once it is assigned to the entryId property of the blog managed bean. One approach is to load the entry lazily as soon as it's referenced in the UI. #{blog.blogEntry.title} #{blog.blogEntry.content} Here's what the managed bean would look like to support this approach: @ManagedBean(name = "blog") public class Blog { private Long entryId; private BlogEntry blogEntry; public Long getEntryId() { return entryId; } public void setEntryId(Long entryId) { this.entryId = entryId; } public BlogEntry getBlogEntry() { if (blogEntry == null) { blogEntry = blogRepository.findEntry(entryId); } return blogEntry; } } Of course, it doesn't make any sense to display an entry without an id (and could even lead to a NullPointerException). So we should really make the id request parameter required. We'll also add a message if it is missing. ... In the case a required request parameter is missing, you can display the error message using the tag. Conversion and validation failures are recorded as global messages since there's no view element with which to associate. But these preconditions still don't stop the view from being rendered if a request parameter is missing or invalid. What we need is a way to execute an initialization method that parallels an action invocation on a postback. That would allow us to get everything sorted before the user sees a response. View initialization While view parameters provide the processing steps from retrieving the request value to updating the model, they do not furnish the action invocation and navigation steps that are part of the faces request life cyle. That means you have to fall back to lazy loading the data as the view is being rendered (i.e., encoded). You are also missing a definitive point to fine tune the UI component tree programmatically before it's encoded. Fortunately, another new feature in JSF 2, system events, makes it possible to perform a series of initialization steps before view rendering begins. Systems events notify registered listeners of interesting transition points in the JSF life cycle at a much finer-grained level than phase listeners. In particular, we are interested in the PreRenderViewEvent, which is fired immediately after the component tree is built (but not yet rendered). If the word "registered" evokes dreadful memories of XML descriptors, fear not. Observing the event we are interested in is just a matter of appending one or more elements to the view metadata facet. The tag has two required attributes, type and listener. The type attribute is the name of the event to observe derived by removing the Event suffix from the end of the event class name and decaptializing the result. We are only interested in one event, preRenderView. The listener attribute is a method binding expression pointing to either a no-arguments method or a method that accepts a SystemEvent. ... We can use this method to retrieve the blog entry before the view is rendered. 
public void loadEntry() { blogEntry = blogRepository.findEntry(entryId); } If the entry cannot be found, you could add conditional logic to the view to display an error message: The blog entry you requested does not exist. Ideally, it would be better not to display the view at all. You can force a navigation to occur using the NavigationHandler API. public void loadEntry() { try { blogEntry = blogRepository.findEntry(entryId); } catch (NoSuchEntryException e) { FacesContext ctx = FacesContext.getCurrentInstance(); ctx.getApplication().getNavigationHandler() .handleNavigation(ctx, "#{blog.loadEntry}", "invalid"); } } The only problem is that the listener method is going to be invoked even if the view parameter could not be successfully converted, validated and assigned to the model property. Once again, there's a JSF 2 feature to the rescue. You can use the new isValidationFailed() method on FacesContext to check whether a conversion or validation failure occurred while processing the view parameters. public void loadEntry() { FacesContext ctx = FacesContext.getCurrentInstance(); if (ctx.isValidationFailed()) { ctx.getApplication().getNavigationHandler() .handleNavigation(ctx, "#{blog.loadEntry}", "invalid"); return; } // load entry } So far we have dealt with a trivial string to long conversion. But view parameters allow you to represent more complex data, as long as you have a converter that can marshal the value from (and to) a string. Let's assume that we want to allow the user to look at blog entries that fall within a range of dates. The before and after dates can be encoded into the URL as follows: /entries.jsf?after=2007-12-31&before=2009-01-01 Those values can then be converted to Date objects using the converter tag and assigned to Date properties on a managed bean as follows: We again use a PreRenderViewEvent listener to load the data before the page is rendered, in this case filtering the collection of blog entries to be displayed. Emulating the behavior of an action-oriented framework, which the previous examples have demonstrated, is one use of the PreRenderViewEvent. Another is to act as a life cycle callback for programmatically creating or tweaking the UI component tree after it is "inflated" from the view template. Perhaps you want to build part of the tree dynamically from a data structure. Accomplishing this in JSF would require "binding" a bean property to an existing UI component, declared using an EL value expression in the binding attribute of the tag. But this approach is really ugly because you have to put the tree-appending logic in either the JavaBean property getter or setter, depending on whether the view is being created or restored. The PreRenderViewEvent offers a much more definitive and self-documentating hook. As you've seen, it's finally possible to respond to a bookmarkable URL in JSF (without pain or brittle code). But, up to this point, all we've done is take, take, take. For bookmarkable support to be complete, we need to be able to create bookmarkable URLs. That brings us to the topic of parameter propagation. Push the parameters on If view parameters were only capable of accepting data sent through the query string of the URL, even considering the built-in conversion and validation they provide, they really wouldn't be all that helpful. What makes them so compelling is that they are bi-directional, meaning they are also propagated to subsequent requests, and rather transparently. 
The subsequent request may be a faces request, which targets the current view, or a non-faces request to a view that has view parameters, which translates into a bookmarkable URL. A request for a bookmarkable URL can come from either a link in the page or a redirect navigation event. We'll look at how view parameters get propagated in all of these cases in this section. Saved by the component tree Let's return to the blog entry view and consider what happens if we have a comment form below the post. The comment form might be defined as follows: Notice that there is no reference to the id of the blog entry in this form. Assuming that the blog entry is not stored in session scope (or a third-party conversation), how will the handler know which entry the comment should be linked? This is where view parameter propagation blends with component tree state saving. When encoding of the view (i.e., rendering) is complete, the view parameter values are tucked away in the saved state of the UI component tree. When the component tree is restored on a postback, such as when the comment form is submitted, the saved view parameter values are applied to the model. This allows view parameters to tie in nicely with the existing design of JSF. The initial state supplied to the view parameters by the URL can be maintained as long as the user interacts with the view (e.g., triggers faces requests through user interface events). You can think of view parameters as an elegant replacement for hidden form fields in this case. If the user bookmarked the URL after posting a comment, however, the reference to the blog entry would be lost. That's because after a POST request, the browser location does not contain a query string. Here's what the user would see: http://domain/blog/entry.jsf If we are following best practices, we'll want to implement the Post/Redirect/Get pattern anyway. That gives us a opportunity to repopulate the query string of the URL. In the past, this would have required an explicit call to the redirect() method of ExternalContext inside the action method. FacesContext.getCurrentInstance().getExternalContext().redirect("/entry.jsf?id=" + blog.getEntryId()); This explicit (and intrusive) call was necessary because the navigation case did not provide any way to append parameters to the query string. Now, view parameters can take care of this for us. We can tell JSF to encode the view parameters of the target view ID into the redirect URL by enabling the include-redirect-params attribute on the element. /entry.xhtml #{commentHandler.post} #{view.viewId} We'll get into navigation more in the next article in this series. Let's talk about those regular old hyperlinks in the page. We want those to be bookmarkable as well. That means the state needs to be encoded into the URL they point to. Once again, view parameters come into play. Bookmarkable links Let's now assume we want to create a bookmarkable link (permalink) to the current blog entry. You can link directly to another JSF view using the outcome attribute of the new hyperlink-producing component tags, and . These component tags are represented by the component class javax.faces.component.UIOutcomeTarget. (The reason the attribute is named outcome and not viewId will be explained in the next article. For now, just know that the value of the outcome attribute can be a view ID). Both of these component tags support encoding the view parameters into the query string of the URL as signaled by the includeViewParams attribute. 
Here's how the permalink is defined: The default value of includeViewParams is false. Since it's set to true, the view parameters are read in reverse and appended to the query string of the link. Here's the HTML that this component tag generated, assuming an entry id of 9: Permalink The link is produced using the new getBookmarkableURL() method on the ViewHandler API. This method calls through to the encodeBookmarkableURL() on the ExternalContext API to have the session ID token tacked on, if necessary. These methods complement the getRedirectURL() and encodeRedirectURL() methods on ViewHandler and ExternalContext, respectively. In a servlet environment, the implementations happen to be the same, but the extra methods serve as both a statement of intent and an extension point for environments where a link URL and a redirect URL are handled differently, such as a portlet. Notice that the context path of the application (/blog) is prepended to the path, the extension is changed from the view suffix (.xhtml) to the JSF servlet mapping (.jsf) and the query string contains the name and value of the view parameter read from the model. If you had used an tag, you would have had to do all of these things manually. That's exactly why the EG felt it was necessary to introduce this component. We can do one better. If the outcome attribute is absent, the current view ID is assumed. So we can shorten the tag to this: If you want the link to appear as a button, you can use the component tag instead. However, note that JavaScript is required in this case to update the browser location when the button is clicked, as you can see from the generated HTML: Permalink View parameters come in especially handy when the number of parameters to keep track of increases. For instance, let's consider the case when a user is searching for entries using a query string in a particular category and wants to paginate through the results. In this case, we are dealing with at least three parameters: Yet the link to these search results still remains as simple as the permalink to an entry: This component tag will produce HTML similar to this: Refresh What if we want to link back to the previous page? In that case, we cannot allow the view parameter named page be automatically written into the query string since that will just link us to the current page. We need an override. Fortunately, it's easy to override an encoded view parameter. You simply use the standard tag, just as you would if you were defining a new query string parameter: View parameters that are encoded into links to the current view ID are pretty intuitive. Where things get tricky is when we use view parameters on a link to a different view ID. This requires putting on your thinking cap and doing some reasoning. View parameter handoff When a request is made for a URL, and in turn a JSF view ID, the view parameters defined in that view are used to map request parameters to model properties. But when the view parameters are encoded into a bookmarkable URL, the mappings are read from the target view ID. That's why it's especially important to be able to extract the view metadata from a template without having building a full component tree, as mentioned earlier. Otherwise, you would end up building a component tree for every view that is linked to in the current view. That would be very costly. Let's consider a use case. Suppose that we want to create a link from the search results to a single entry. 
We would define the link as follows: #{_entry.title} #{_entry.excerpt} The question to ask yourself is this. "Are the search string, category and page offset included in the URL for the entry?". I hope you said "No". The reason is because when the URL for the entry link is built, the component tag reads the view parameter mappings defined in the /entry.xhtml template. The only parameter mapped in that template is entry id. In order to preserve the filter vector, the view parameters defined in the /entries.xhtml view need to also be in the /entry.xhtml template. Aha! Since these are shared view parameters, we should define them in a common template: We can then include that template in each view that needs to preserve these view parameters: ... Keep in mind that if the user navigates to the entry after performing a search, the URL for the entry shown in the browser's location bar will now contain the filter vector. But if you want the user to be able to return to the search filter (without using the back button), that's what you want. You can always provide a simple permalink to bookmark just the entry. Even though you are now defining view parameters in each of the views, that doesn't mean the URL will become littered with empty query string parameters when they are not in use. View parameters are only encoded (i.e., added to the query string) if the value is not null. Otherwise, there is no trace of the view parameter. You have now learned how view parameters are propagated during a postback, on a redirect and into a bookmarkable URL. The main benefit of this process is that it is transparent. You don't have to worry about each and every request parameter that comprises the state in the query string of the URL. Instead, JSF interprets the view parameter metadata defined in the template of the target view and automatically appends those name/value pairs to the URL when you activate this feature. Bookmark it View parameters serve as an alternative to storing state in the UI component tree, provide a starting point for the application, help integrate with legacy applications, assert preconditions of views, make views bookmarkable, and, with help of the new UIOutcomeTarget components or the enhancement to the redirect navigation case, produce links to those bookmarkable views. This article began by introducing you to the view metadata facet, which is a general facility for defining a view's metamodel that reuses the existing UI component infrastructure. You learned that view parameters and PreRenderViewEvent listeners are the first standard implementations of view metadata. You saw how the combination of these two features allow you to capture initial state from URL query string, validate preconditions and load data for a view all before the view is rendered. Finally, you learned how view parameter values are propagated to subsequent requests. This series continues by taking a deeper look at the navigation enhancements made in JSF 2 and explaining how those changes tie into the bookmarkability that you learned about in this article. So bookmark and check back again soon!
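The Facelets markup did not survive in this excerpt. Pulling the article's blog example together, a sketch of the entry view's metadata and permalink, using the bean and parameter names mentioned above, might look roughly like this (attribute names assume the standard JSF 2 tag libraries):

    <f:metadata>
        <!-- Bind the ?id= request parameter to the blog bean, with conversion/validation -->
        <f:viewParam name="id" value="#{blog.entryId}"
                     required="true"
                     requiredMessage="The entry id is required."/>
        <!-- Load data before the view is rendered -->
        <f:event type="preRenderView" listener="#{blog.loadEntry}"/>
    </f:metadata>

    <!-- Renders a GET link to the current view with ?id=... appended -->
    <h:link value="Permalink" includeViewParams="true"/>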
October 29, 2009
by Dan Allen
· 102,263 Views
article thumbnail
Fault Injection Testing - First Steps with JBoss Byteman
Fault injection testing[1] is a very useful element of a comprehensive test strategy in that it enables you to concentrate on an area that can be difficult to test: the manner in which the application under test is able to handle exceptions. It's always possible to perform exception testing in a black box mode, where you set up external conditions that will cause the application to fail, and then observe those application failures. Setting up, automating, and reproducing conditions such as these can, however, be time consuming. (And a pain in the neck, too!) JBoss Byteman I recently found a bytecode injection tool that makes it possible to automate fault injection tests. JBoss Byteman[2] is an open-source project that lets you write scripts in a Java-like syntax to insert events, exceptions, etc. into application code. Byteman version 1.1.0 is available for download from: http://www.jboss.org/byteman - the download includes a programmer's guide. There's also a user forum for asking questions here: http://www.jboss.org/index.html?module=bb&op=viewforum&f=310, and a jboss.org JIRA project for submitting issues and feature requests here: https://jira.jboss.org/jira/browse/BYTEMAN A Simple Example The remainder of this post describes a simple example, on the scale of the classic "hello world" example, of using Byteman to insert an exception into a running application. Let's start by defining the exception that we will inject into our application: package sample.byteman.test; /** * Simple exception class to demonstrate fault injection with byteman */ public class ApplicationException extends Exception { private static final long serialVersionUID = 1L; private int intError; private String theMessage = "hello exception - default string"; public ApplicationException(int intErrNo, String exString) { intError = intErrNo; theMessage = exString; } public String toString() { return "**********ApplicationException[" + intError + " " + theMessage + "]**********"; } } /* class */ There's nothing complicated here, but note the string that is passed to the exception constructor at line 13. Now, let's define our application class: package sample.byteman.test; /** * Simple class to demonstrate fault injection with byteman */ public class ExceptionTest { public void doSomething(int counter) throws ApplicationException { System.out.println("called doSomething(" + counter + ")"); if (counter > 10) { throw new ApplicationException(counter, "bye!"); } System.out.println("Exiting method normally..."); } /* doSomething() */ public static void main(String[] args) { ExceptionTest theTest = new ExceptionTest(); try { for (int i = 0; i < 12; i ++) { theTest.doSomething (i); } } catch (ApplicationException e) { System.out.println("caught ApplicationException: " + e); } } } /* class*/ The application instantiates an instance of ExceptionTest at line 18, then calls the doSomething method in a loop; once the counter is greater than 10, doSomething raises the exception that we defined earlier. When we run the application, we see this output: java -classpath bytemanTest.jar sample.byteman.test.ExceptionTest called doSomething(0) Exiting method normally... called doSomething(1) Exiting method normally... called doSomething(2) Exiting method normally... called doSomething(3) Exiting method normally... called doSomething(4) Exiting method normally... called doSomething(5) Exiting method normally... called doSomething(6) Exiting method normally... called doSomething(7) Exiting method normally...
called doSomething(8) Exiting method normally... called doSomething(9) Exiting method normally... called doSomething(10) Exiting method normally... called doSomething(11) caught ApplicationException: **********ApplicationException[11 bye!]********** OK. Nothing too exciting so far. Let's make things more interesting by scripting a Byteman rule to inject an exception before the doSomething method has a chance to print any output. Our Byteman script looks like this: # # A simple script to demonstrate fault injection with byteman # RULE Simple byteman example - throw an exception CLASS sample.byteman.test.ExceptionTest METHOD doSomething(int) AT INVOKE PrintStream.println BIND buffer = 0 IF TRUE DO throw sample.byteman.test.ApplicationException(1,"ha! byteman was here!") ENDRULE Line 4 - RULE defines the start of the rule. The text that follows on this line is the rule name and is not executed Line 5 - Reference to the class of the application to receive the injection Line 6 - And the method in that class. Note that if we had written this line as "METHOD doSomething", the rule would have matched any signature of the doSomething method Line 7 - Our rule will fire when the PrintStream.println method is invoked Line 8 - BIND determines values for variables which can be referenced in the rule body - in our example, the recipient of the doSomething method call that triggered the rule is identified by the parameter reference $0 Line 9 - A rule has to include an IF clause - in our example, it's always true Line 10 - When the rule is triggered, we throw an exception - note that we supply a string to the exception constructor Now, before we try to run this rule, we should check its syntax. To do this, we build our application into a .jar (bytemanTest.jar in our case) and use bytemancheck.sh sh bytemancheck.sh -cp bytemanTest.jar byteman.txt checking rules in sample_byteman.txt TestScript: parsed rule Simple byteman example - throw an exception RULE Simple byteman example - throw an exception CLASS sample.byteman.test.ExceptionTest METHOD doSomething(int) AT INVOKE PrintStream.println BIND buffer : int = 0 IF TRUE DO throw (1"ha! byteman was here!") TestScript: checking rule Simple byteman example - throw an exception TestScript: type checked rule Simple byteman example - throw an exception TestScript: no errors Once we get a clean result, we can run the application with Byteman. To do this, we run the application and specify an extra argument to the java command. Note that Byteman requires JDK 1.6 or newer. java -javaagent:/opt/Byteman_1_1_0/build/lib/byteman.jar=script:sample_byteman.txt -classpath bytemanTest.jar sample.byteman.test.ExceptionTest And the result is: caught ApplicationException: **********ApplicationException[1 ha! byteman was here!]********** Now that the Script Works, Let's Improve it! Let's take a closer look at how we BIND to a method parameter. If we change the script to read as follows: # # A simple script to demonstrate fault injection with byteman # RULE Simple byteman example - throw an exception CLASS sample.byteman.test.ExceptionTest METHOD doSomething(int) AT INVOKE PrintStream.println BIND counter = $1 IF TRUE DO throw sample.byteman.test.ApplicationException(counter,"ha! byteman was here!") ENDRULE In line 8, the BIND clause now refers to the int method parameter by index using the syntax $1. This change makes the value available inside the rule body by enabling us to use the name "counter."
The value of counter is then supplied as the argument to the constructor for the ApplicationException class. This new version of the rule demonstrates how we can use local state derived from the trigger method to construct our exception object. But wait, there's more! Let's use the "counter" value as a counter. It's useful to be able to force an exception the first time a method is called. But, it's even more useful to be able to force an exception at a selected invocation of a method. Let's add a test for that counter value to the script: # # A simple script to demonstrate fault injection with byteman # RULE Simple byteman example 2 - throw an exception at 3rd call CLASS sample.byteman.test.ExceptionTest METHOD doSomething(int) AT INVOKE PrintStream.println BIND counter = $1 IF counter == 3 DO throw sample.byteman.test.ApplicationException(counter,"ha! byteman was here!") ENDRULE In line 9, we've changed the IF clause to make use of the counter value. When we run the test with this script, the first three calls to doSomething complete normally, but the call with counter equal to 3 fails. One Last Thing - Changing the Script for a Running Process So far, so good. We've been able to inject a fault/exception into our running application, and even specify the iteration of a loop in which it happens. Suppose, however, that we want to change a value in a Byteman script while the application is running? No problem! Here's how. First, we need to alter our application so that it can run for a long enough time for us to alter the Byteman script. Here's a modified version of the doSomething method that waits for user input: public void doSomething(int counter) throws ApplicationException { BufferedReader lineOfText = new BufferedReader(new InputStreamReader(System.in)); try { System.out.println("Press <return>"); String textLine = lineOfText.readLine(); } catch (IOException e) { e.printStackTrace(); } System.out.println("called doSomething(" + counter + ")"); if (counter > 10) { throw new ApplicationException(counter, "bye!"); } System.out.println("Exiting method normally..."); } If we run this version of the application, we'll see output like this: Press <return> called doSomething(0) Exiting method normally... Press <return> called doSomething(1) Exiting method normally... Press <return> called doSomething(2) Exiting method normally... caught ApplicationException: **********ApplicationException[3 ha! byteman was here!]********** Let's run the application again, but this time, don't press <return>. While the application is waiting for input, create a copy of the Byteman script. In this copy, change the IF clause to have a loop counter set to a different value, say '5.' Then, open up a second command shell window and enter this command: Byteman_1_1_0/bin/submit.sh sample_byteman_changed.txt Then, return to the first command shell window and start pressing return, and you'll see this output: Press <return> redefining rule Simple byteman example - throw an exception called doSomething(0) Exiting method normally... Press <return> called doSomething(1) Exiting method normally... Press <return> called doSomething(2) Exiting method normally... Press <return> called doSomething(3) Exiting method normally... Press <return> called doSomething(4) Exiting method normally... caught ApplicationException: **********ApplicationException[5 ha! byteman was here!]********** So, we were able to alter the value used by the Byteman rule without stopping the application under test!
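The steps above are easy to wrap in an automated test. The following is a minimal sketch of one way to do that from JUnit, launching the original (non-interactive) ExceptionTest with the Byteman agent via ProcessBuilder and asserting on its output. The byteman.jar location, script name, jar name, and class name are assumptions carried over from the example; adjust them for your environment.

import static org.junit.Assert.assertTrue;

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.junit.Test;

public class FaultInjectionTest {

    @Test
    public void exceptionIsInjectedByByteman() throws Exception {
        // Paths and names below are taken from the example; they are assumptions.
        ProcessBuilder pb = new ProcessBuilder(
                "java",
                "-javaagent:/opt/Byteman_1_1_0/build/lib/byteman.jar=script:sample_byteman.txt",
                "-classpath", "bytemanTest.jar",
                "sample.byteman.test.ExceptionTest");
        pb.redirectErrorStream(true);
        Process process = pb.start();

        // Collect everything the instrumented application prints.
        StringBuilder output = new StringBuilder();
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            output.append(line).append('\n');
        }
        process.waitFor();

        // The injected ApplicationException should appear in the output.
        assertTrue(output.toString().contains("ha! byteman was here!"));
    }
}

In a real test suite you would also guard the process with a timeout, but this shows the general shape of an automated, repeatable fault injection test.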
Pitfalls Along the Way Some of the newbie mistakes that I made along the way were: Each RULE needs an IF clause - even if you want the rule to always fire. The methods referenced in a RULE cannot be static - if they are static, then there is no $0 (aka this) to reference. Yes, I had several errors and some typos the first few times I tried this. A syntax checker is always my best friend. ;-) Closing Thoughts With this simple example, we're able to inject faults into a running application in an easily automated/scripted manner. But we've only scratched the surface with Byteman. In subsequent posts, I'm hoping to explore using Byteman to cause more widespread havoc in software testing. References [1] http://en.wikipedia.org/wiki/Fault_injection [2] http://www.jboss.org/byteman (Special thanks to Andrew Dinn for his help! ;-)
October 16, 2009
by Len DiMaggio
· 17,102 Views
article thumbnail
Integrating JBoss RESTEasy and Spring MVC
Building websites is a tough job. It's even tougher when you also have to support XML and JSON data services. Developers need to provide increasingly sophisticated AJAXy UIs. Marketing groups and other business units are becoming more savvy to the benefits of widgets and web APIs. If you're a Java developer who needs to implement those sexy features, you're likely going to accomplish that work with a dizzying variety of frameworks for web development, data access and business logic. The Spring Framework has a strong presence based on the premise of seamless (no pun intended) integration of all of those frameworks. The Spring framework integrates with a host of JEE standard technologies, such as EJBs and JSP. Spring MVC is a sub-project of the larger Spring Framework that has its own Controller API and also integrates other web development frameworks such as JSF, Struts and Tiles. While the Spring Framework also integrates with new JEE technologies as they develop, for a variety of reasons it has not integrated with the tour de force JAX-RS standard, which delivers an API for constructing RESTful services. There are six implementations of the JAX-RS standard, and each provides some level of integration with Spring. Most of those integrations work with the Spring framework proper, but don't take advantage of the benefits of Spring MVC. JBoss RESTEasy integrates with both the Spring Framework proper and also with the Spring MVC sub-project. In this article, we're going to explore how to use RESTEasy along with a Spring MVC application. We'll take a deep dive into the internals of Spring MVC, and we'll discuss JAX-RS and how it relates to MVC web development. We'll also touch on quite a few technologies beyond Spring MVC and RESTEasy, including Jetty and Maven. We're also going to discuss theoretical concepts relating to REST and Dependency Injection. This article has to cover quite a bit of ground and you'll be gaining quite a few tools you can use to develop complex web applications. If you follow this article, you'll be constructing an end-to-end web application; however, feel free to skim the article to find material that's relevant to you. REST and JAX-RS REST has been an increasingly trendy topic over the last three years. We as a development community have been looking at REST as an effective way to perform distributed programming and data-oriented services. In fact, the Java community's REST leaders got together and created a standard spec to standardize some RESTful ideas in JSR 311 - JAX-RS, the Java API for RESTful Web Services. The focus of JAX-RS was to create an API that Java developers could use to perform RESTful data exchanges. However, the Java community quickly saw the similarities between JAX-RS and MVC (Model-View-Controller) infrastructures. James Strachan, a long time Java community member and open source contributor (to things like DOM4J, Groovy - he created the language - and recently the Apache Camel and CXF ESBs) suggested JAX-RS as "the one Java web framework to rule them all?". Jersey, the production ready JAX-RS reference implementation, has a built-in JSP rendering mechanism. The RESTEasy community built a similar mechanism in HTMLEasy. The Jersey and HTMLEasy approaches work well for simpler websites, but they don't solve some of the more complex needs of an application. If you want more complex functionality, you'll need a more sophisticated web-development platform, such as Spring MVC.
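If JAX-RS is new to you, the sketch below gives a feel for the annotation-driven, data-oriented style the spec was designed for. The class, path, and method names here are illustrative only and are not part of the sample application built later in this article.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A bare-bones JAX-RS resource: the annotations map HTTP requests to methods.
@Path("/greetings")
public class GreetingResource {

    // Handles GET /greetings/{name} and returns a plain-text response.
    @GET
    @Path("{name}")
    @Produces(MediaType.TEXT_PLAIN)
    public String greet(@PathParam("name") String name) {
        return "Hello, " + name;
    }
}

Returning domain objects instead of strings, combined with @Produces for XML or JSON media types, is what gives you the automatic data rendering discussed throughout the rest of this article.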
A combination of Spring MVC and RESTEasy will have the following benefits compared to the simpler approaches: Session based objects Freedom of choice - chose the right tool for the job Spring MVC integrates with a whole bunch of MVC frameworks, including Spring MVC, Struts2 and now RESTEasy Spring MVC integrates with a whole bunch of View frameworks, including JSP, JSF, Tiles and much more Integrated AJAX components - the freedom of choice can make end-to-end AJAX calls a breeze, assuming you chose the appropriate framework More control over URL mapping This article tackles some more advanced topics. If you want some relevant background, we have a reference section at the end of this article. Before we take a look at code, let's take a more in depth view of Spring MVC. Spring MVC Spring MVC is broken down into three pluggable sub-systems: Handler Mapping - Map a URL to Spring Bean/Controller. Spring allows quite a few methods to perform this mapping. It can be based on the name of a Spring bean, it could be a URL to bean map, it could be based on an external configuration file or it could be based on annotations. Handler Mappings allow you to configure complex mappings without resorting to complex web.xml files. Handler Adapter - Invoke the Controller. Hander Adapters know what type of spring beans they can call and performs invocations on the types of beans it knows about. There are Handler Adapters for Spring MVC classic, spring @MVC, Struts, Struts2, Servlets, Wicket, Tapestry, DWR and more. View Mapping - Invoke the View. View Mappers know how to translate a logical view name produced by a Controller into a concrete View implementation. A name like "customers" may translate into any of the following technologies: JSP/JSTL, JSF, Velocity, FreeMarker, Struts Tiles, Tiles2, XSLT, Jasper Reports, XML, JSon, RSS, Atom, PDF, Excel, and more RESTEasy plugs into Spring MVC in all three sub-systems. JAX-RS Resources/Controllers are defined by annotations; therefore RESTEasy provides a ResteasyHandlerMapper that knows how to convert a URL to a RESTEasy managed JAX-RS controller method. Once RESTEasy determines which method to invoke, the ResteasyHandlerMapping performs the invocation. The invocation can either be an object, which invokes the default JAX-RS behavior which transforms the resulting Object to a Represetation such as XML or JSON. Additionally, you return a traditional Spring ModelAndView which can refer to a logical view name and a map of data to be rendered by the View. The default JAX-RS behavior creates a ResteasyView which uses JAX-RS's configurable MessageBodyReader and MessageBodyWriter transformation framework. RESTEasy can produce XML and JSON using JAXB, but can be configured to use other view technologies such as Jackson, which is a performant and flexible JSON provider, Flex AMF. This separation of Controller and View concepts allows you to mix and match your Controller and View technologies. RESTEasy Resources can call any Spring managed Views and other Controller technologies can be rendered by a ResteasyView. You can either use RESTEasy as your sole MVC framework, if it fits your needs, or you can augment an existing Controller infrastructure with data services provided by RESTEasy. Just as importantly, you can leverage all of the other functionality that Spring provides, such as DAO abstraction, transaction management and AOP. Your First SpringMVC/RESTEasy Application Before we start reviewing the project, let's review a quick checklist of items we will be reviewing. 
The project files fall into two categories: configuration and source code. All of the code that will be covered is available in the RESTEasy repository and can be downloaded (as a tar.gz file), or browsed. Here is a list of files that each category will require. Configuration Files: web.xml - servlet configuration with Spring MVC artifacts - Spring MVC's DispatcherServlet, and map to /contacts/* springmvc-servlet.xml - a Spring application context configuration with all of the Spring beans this project needs, including RESTEasy setup (one line) and JSP configuration pom.xml - maven 2 dependency configuration, including required repositories, RESTEasy dependencies and embedded Jetty setup Source Code: The code we're going to show you can be broken down into four layers: Controller - Controlling the flow between the HTTP request, the Model and the View ContactsResource.java - a RESTful Controller with JAX-RS annotations and some traditional HTML controller methods. It will be annotated with Spring's @Controller and @Autowired annotations as well. Model - the domain model and service objects in our case. In our case, we have 2 domain objects: Contact and Contacts; and 1 Service object: ContractService Contact.java - a JAXB domain object with contact information Contacts.java - a JAXB wrapper object that wraps a Contact List ContactService.java - a Map based repository of Contact instances View - How the domain model is transformed for consumer use. JAX-RS performs automated conversion to XML based on annotations on our domain model. We'll be using JSP for object to HTML conversion. contacts.jsp - a bare bones HTML view of our Contacts Test - JAX-RS provides quite a bit of functionality, we're writing quite a bit of code, and all of that is wrapped in quite a bit of configuration. This article will focus on testing our code, configuration and deployment in an automated JUnit test. ContractTest.java - a RESTEasy ReSTful Unit test for the ContractsResource functionality, embedded server included There's a lot of ground to cover, and we'll cover the most interesting pieces of the source first. Our first pass will cover the web.xml and springmvc-servlet.xml configuration files as well as the ContactsResource.java and ContactTest.java source files. Our second pass will cover the remaining topics. Core RESTEasy/Spring MVC artifacts web.xml Spring MVC's entry point is the DispatcherServlet. There are two parts in setting up the DispatcherServlet, the first is to map the servlet to the URL pattern which must follow the rules specified in Section 11.2 of Servlet API Specification. The next step is configure the behavior of the the servlet by providing the configuration file which we call 'springmvc-servlet.xml'. By default, DispatcherServlet looks for a configuration file at "WEB-INF/{servlet-name}-servlet.xml" to find it's configuration, but we're going to use a Spring configuration from the classpath so that the configuration can be reused later in our junit test case. springmvc org.springframework.web.servlet.DispatcherServlet contextConfigLocation classpath:springmvc-servlet.xml springmvc /contacts/* All requests will be forwarded to the Spring MVC DispatcherServlet. One can get much more sophisticated, but this is one of the simplest simplest web.xml you can create to integrate with Spring. Note that there isn't any reference here to a RESTEasy servlet. 
Other JAX-RS/Spring integrations require you to have an implementation-specific Servlet to serve XML or JSON and a separate Spring MVC DispatcherServlet mapping to serve HTML requests. RESTEasy integrates with DispatcherServlet to allow Spring MVC to direct the URL to either RESTEasy Resources or Spring MVC Controllers. Next, let's take a look at the Spring MVC configuration. springmvc-servlet.xml In our basic project, there are five things we need to do in the springmvc-servlet.xml file: (1) register the Spring namespace; (2) register the package(s) to scan for Spring MVC annotations; (3) configure the context annotation processor; (4) import the springmvc-resteasy.xml configuration file, which specifies the default RESTEasy/Spring MVC integration Spring beans; and (5) configure the ViewResolver bean to configure the presentation layer to use JSP. Let's inspect the springmvc-servlet.xml file and focus on each of the above items. The springmvc-servlet.xml file itself is pretty short and shows off some of the features from Spring 2.5: Demystifying the Spring Configuration Spring allows for custom namespaces to reduce the verbosity of the configuration files. We make use of the namespace by registering it in lines 3 and 4. Line 6 (component-scan) informs Spring which package(s) we want to scan to create the custom component object instances, such as controllers and service objects. We tell Spring about the packages we're interested in by using the custom namespace and setting the base-package attribute to the packages we're interested in (org.jboss.resteasy.examples.springmvc). Later on, you'll see that we're going to be using two Spring annotations that allow the Spring runtime to glean Dependency Management information directly from the object itself: @Controller and @Service. Line 7 (annotation-config) tells Spring that our application will be using annotations to configure the beans created by the component-scan operation of line 6. Spring looks for annotations such as Spring's @Autowired and @Required; JEE's @Resource; and JPA's @PersistenceContext and @PersistenceUnit to describe dependencies between bean instances. Spring also looks for life-cycle annotations such as JSR 250's @PostConstruct and @PreDestroy. Our environment requires a dependency between our Controller and Service objects, and the annotation-config declaration will allow us to configure that relationship in Java code. Line 8 (import) is all the XML that is necessary to configure a RESTEasy environment in Spring MVC. The nice thing about the integration with RESTEasy is that most of the configuration is done for you within an embedded configuration file called springmvc-resteasy.xml. Lines 9-14 tell Spring how we intend to handle the rendering of our presentation layer. In our case, we want to use JSTL views that translate view names (such as "contacts") to a JSP page found in the /WEB-INF/ directory (specifically /WEB-INF/contacts.jsp in our case). For more information about setting up Spring views, take a look at the Spring documentation. Next, let's take a look at how you can mix and match Spring and JAX-RS annotations in a Controller/Resource. ContactsResource.java MVC Controllers control the flow between the Model and the View. Resource is REST's equivalent to Controllers, and we'll be using the terms Resource and Controller interchangeably. In our case, our resource handles requests to /contacts and /contacts/{id}.
Our ContractsResource must perform quite a few functions on those two URL templates: Retrieve all Contacts - Display the results in either HTML, XML or JSon. For clarity, we'll break out the data oriented functionality (XML and JSon) from the user oriented functionality (HTML) into two distinct URLs - /contacts for HTML and /contacts/data for XML and JSon. REST allows a client to select which format it prefers to receive the data in through a process called Content Negotiation. Content Negotiation can happen through HTTP headers, URI or query parameters. Our ContractsResource will use different URIs to differentiate between data oriented and user views, and will use HTTP headers to differentiate between XML and JSon data views. Save a Contact - Create or Updating data is a pretty standard requirement. The Save a Contact functionality mirrors the Content Negotiation needs of Retrieve all Contacts. User oriented data exchange comes in the form of HTML form data, and data oriented exchange usually occurs in XML and JSon. These differing requirements require ContractResource to have two distinct JAX-RS Java methods; we'll also separate the URLs for clarity purposes. View a Contact - We'll create a single view for viewing a single contact that returns XML or JSon. We leave the user oriented view as an exercise for the reader. Here's another view of our requirements: Functionality URL Format Java Method User Oriented View All /contacts HTML viewAll() Data Oriented View All /contacts/data XML or JSon getAll() User Oriented Save /contacts/ Form data saveContactForm() Data Oriented Save /contacts/data XML or JSon saveContact() Data Oriented View Single /contacts/data/{lastName} XML or JSon get() Note that we mixed and matched HTML and data oriented functionality in this requirement. Now that we have our requirements in place, let's take a look at the ContactsResource code. There are quite a few new Spring and JAX-RS annotations which we'll explain right after the code: @Controller @Path(ContactsResource.CONTACTS_URL) public class ContactsResource { public static final String CONTACTS_URL = "/contacts"; @Autowired ContactService service; @GET @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) @Path("data") public Contacts getAll() { return service.getAll(); } @GET @Produces(MediaType.TEXT_HTML) public ModelAndView viewAll() { // forward to the "contacts" view, with a request attribute named // "contacts" that has all of the existing contacts return new ModelAndView("contacts", "contacts", service.getAll()); } @PUT @POST @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) @Path("data") public Response saveContact(@Context UriInfo uri, Contact contact) throws URISyntaxException { service.save(contact); URI newURI = UriBuilder.fromUri(uri.getPath()).path(contact.getLastName()).build(); return Response.created(newURI).build(); } @POST @PUT @Consumes(MediaType.APPLICATION_FORM_URLENCODED) @Produces(MediaType.TEXT_HTML) public ModelAndView saveContactForm(@Form Contact contact) throws URISyntaxException { service.save(contact); return viewAll(); } @GET @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) @Path("data/{lastName}") public Contact get(@PathParam("lastName") String lastName) { return service.getContact(lastName); } } This code is packed with annotations and Java code that's indicative of JAX-RS Resources and Spring applications. There is also a RESTEasy custom annotation. 
The Spring IoC annotations are well documented, but we are using them in unusual ways for our integration: @Controller tells the Spring runtime that it needs to create an instance of ContactsResource at startup time. Do you remember the component-scan directive that was used in the Spring configuration section? The combination of the directive and the annotation tells Spring that a singleton instance of ContactsResource must be created at startup. Spring has a more generic @Component, but the use of @Controller allows for more precise definition of bean usage and also allows for future upgrades that involve AOP to create more precise targeting. @Controller is usually associated with Spring @MVC annotated controllers rather than other Controller infrastructures, but even though this is not a Spring MVC controller, we use the annotation to tell Spring that this is indeed a Controller. The association of @Controller with Spring MVC annotated controllers is a loose coupling in the Spring runtime. We'll use JAX-RS annotations to configure the URL mapping and HTTP handling behavior. You could theoretically add additional Spring @MVC annotations such as @RequestMapping (which is an equivalent of JAX-RS @Path) to our ContactsResource, if you really wanted to. @Autowired tells the Spring runtime that instances of ContactsResource require an instance of ContactService. We'll be coding the ContactService later in this article. You can take a look at the Spring reference documentation for more information about @Autowired and @Controller. The last Spring artifact that we use is ModelAndView: it is Spring MVC's encapsulation of which logical View to use and which Model variables should be passed into the View. In our case, we're going to create a Model variable called "contacts" that is a List of all Contact objects we have in the system. We're passing that variable to a logical view named "contacts" which will map to "/WEB-INF/contacts.jsp" based on the Spring configuration that we previously discussed. The JAX-RS annotations are also well documented, but it's definitely worthwhile to give a brief overview: @Path tells RESTEasy (or other JAX-RS environments) how to map URLs to Java methods. Adding @Path at the class level, in our case "/contacts", indicates that all methods are prefixed with that URL. The @Path value can either be a hard coded URL such as "/contacts" or it can be a URI template such as "data/{lastName}". You can even specify regular expressions for more sophisticated filtering in the URI template. @GET, @PUT and @POST are used in combination with @Path to indicate which specific HTTP methods are handled by individual Java methods. @Produces and @Consumes are used to further filter how a request should be handled, based on content negotiation using the Accept and Content-Type HTTP headers. JAX-RS provides a set of default mime type values in the MediaType class. @PathParam is a method parameter annotation that indicates how a URI template variable is mapped to a method parameter. There are quite a few other method parameter level annotations that you could use to map HTTP headers, cookies, query parameters and form parameters to method parameters. @Context is an interesting JAX-RS parameter annotation that allows dependency injection of request level information such as HttpRequest, HttpResponse and UriInfo (which, as you can probably guess, encapsulates information about the request URI).
It's important to note that Spring by default manages beans such as ContactsResource as a singleton; if ContactsResource was a Prototype or Request scoped bean, you would be able to use the @Context annotation on member variables in addition to method variables. For more on Spring scoping see the Spring Framework documentation. The last annotation we need to talk about is @Form. It's a RESTEasy custom annotation that describes that a member variable encapsulates data from HTML forms. If you recall, we used the JAX-RS @FormParam annotation on our Contact domain object. @Form and @FormParam are used in concert to allow for better maintenance of form based processing systems. JAX-RS 2.0's stated goals include a more robust, uniform Form processing annotation system. The functionality to code ratio is pretty high because of all of the declarative coding conventions of these annotations. Now that we've discussed the most involved pieces of the puzzle, let's take a look at completing the project. Additional Artifacts pom.xml Our pom.xml includes dependency management, description of required Maven repositories, a description of which JDK we're going to use and a Jetty web server configuration. We'll cover the repository selection, the dependencies specific to RESTEasy and a jetty-maven integration External Repositories jboss jboss repo http://repository.jboss.org/maven2 scannotation http://scannotation.sf.net/maven2 java.net http://download.java.net/maven/1 legacy maven repo maven repo http://repo1.maven.org/maven2/ Project Dependencies Now that we've informed Maven which additional repositories are required, we can now include the dependencies the our sample project will require. The section of the pom.xml file, should include the following two dependencies for Spring and RESTEasy functionality - resteasy-spring and resteasy-jaxb-provider: org.jboss.resteasy resteasy-spring 1.2.RC1 org.jboss.resteasy resteasy-jaxb-provider 1.2.RC1 org.mortbay.jetty maven-jetty-plugin 6.1.15 test The resteasy-spring dependency includes the adapter that integrates RESTEasy into Spring's MVC and provides most the required Java dependencies for RESTEasy and Spring. It also contains Spring configuration needed within the embedded spring-resteasy.xml file that will be used in the Spring configuration section. The other RESTEasy dependency that's included, resteasy-jaxb-provider, contains classes that convert the payload into various formats before sending it to the client. The last dependency to focus on is the maven-jetty-plugin which allows us to easily startup our project in a Jetty webserver environment. Note: If you're follow the link above to the RESTEasy repository's version of pom.xml, you will have to modify the version of resteasy-spring and resteasy-jaxb-provider to the latest version that has been deployed, specifically 1.2.RC1 at the time this article was written. The RESTEasy repository contains a soon-to-be-deployed version number which will not work unless you build the entire RESTEasy project. Maven Jetty Plugin One last interesting item of pom.xml is the configuration of the Jetty web server resteasy-springMVC org.mortbay.jetty maven-jetty-plugin 6.1.15 / 2 ... This will allow us to startup Jetty against localhost:8080. You can learn more about the maven Jetty plugin and a variety of configuration options. Let's start with the domain model and move on to the service object. From there, we'll discuss the JAX-RS Resource/Controller. 
From there, we'll explore the unit test and finally we'll write the JSP View and start up our server. Contact.java Our DTO is going to be deceptively simple. It will perform a dual responsibility of JAXB XML binding and Form parameter binding. Both sets of functionality will be configured through annotations and will be managed through JAXB and JAX-RS: import javax.ws.rs.FormParam; import javax.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Contact { private String firstName, lastName; // default constructor for JAXB (also required by JPA/Hibernate if you use them) public Contact(){} // helper constructor for our Controller/Service operations public Contact(String firstName, String lastName){ this.firstName = firstName; this.lastName = lastName; } @FormParam("firstName") public void setFirstName(String firstName) { this.firstName = firstName; } public String getFirstName() { return firstName; } @FormParam("lastName") public void setLastName(String lastName) { this.lastName = lastName; } public String getLastName() { return lastName; } // equals and hashCode are added for the Map based Service object public boolean equals(Object other){ .. } public int hashCode(){ .. } } The @FormParam annotation on the setters tells JAX-RS to bind any incoming form parameters to the appropriate setter. The @XmlRootElement annotation is enough to tell JAXB that the Contact object's getters and setters must be bound to an XML document that will look like: Richard Burton Contacts.java The Contacts class is a simple wrapper around a List of Contact instances: @XmlRootElement public class Contacts { private Collection<Contact> contacts; public Contacts() { this.contacts = new ArrayList<Contact>(); } public Contacts(Collection<Contact> contacts) { this.contacts = contacts; } @XmlElement(name="contact") public Collection<Contact> getContacts() { return contacts; } public void setContacts(Collection<Contact> contact){ this.contacts = contact; } } Contacts has the @XmlRootElement, just like Contact. The @XmlRootElement annotation tells JAXB to transform objects of this type to an XML structure that has <contacts> as its top level element. In addition, we've added the @XmlElement annotation to the getContacts() method. By default, JAXB renders all JavaBean elements and uses the JavaBean name as the element. JAXB handles Lists as special cases: all List elements are translated to XML elements using the JavaBean name. @XmlElement(name="contact") tells JAXB that we opted to override the default name ("contacts") in favor of our own name ("contact" - no 's'). The Contacts object will bind to XML that looks like: Richard Burton Solomon Duskis Now that we have our Domain model in place, let's start using it in our Service tier. ContactService.java Since this article is JAX-RS centric, we're not going to create an elaborate service layer, but we'll add one since more robust Spring applications do require service or data access layers. If you're interested in seeing a RESTEasy/Spring application with database access, look here. Our ContactService performs simple in-memory storage of Contacts by last name: @Service public class ContactService { private Map<String, Contact> contactMap = new ConcurrentHashMap<String, Contact>(); public void save(Contact contact){ contactMap.put(contact.getLastName(), contact); } public Contact getContact(String lastName){ return contactMap.get(lastName); } public Contacts getAll() { return new Contacts(contactMap.values()); } } There are two items of interest that are noteworthy: Notice the use of Spring's @Service annotation.
Do you remember the component-scan directive that was used in the Spring configuration section? The combination of the directive and the annotation tells Spring that a singleton instance of ContactService must be created at startup. Spring has a more generic @Component, but the use of @Service allows for more precise definition of bean usage and also allows for future upgrades that involve AOP to create more precise targeting. Notice the use of ConcurrentHashMap. It's a JDK 1.5 addition that adds performance in multi-threaded environments. It's an easy way to boost performance in distributed REST applications. Next, let's take a look at the JSP, contacts.jsp. We've explored the Model and Controller aspects of MVC. The last piece of the puzzle is the View. Most JAX-RS based interactions perform a more automated conversion of objects like our Contact to a data-oriented view, such as XML or JSON. Traditionally, Java EE MVC has been done with a more manual View management with languages such as JSP. Our JSP will take the Contacts instance created in ContactsResource.viewAll() and render it in basic HTML: Hello Contacts! Hello ${contact.firstName} ${contact.lastName} Save a contact, save the world: First Name: Last Name: This JSP loops over all contacts and adds links to their data-oriented View. It also creates a simple HTML form for creating a new Contact. While this JSP is simple, it will help us exercise three of our ContactsResource Controller methods: viewAll(), saveContactForm(), and get(). It could also be a spring board for more complicated AJAX/JSON interaction, but that's beyond the scope of this article. The code and configuration is now complete, so let's run this project! Jetty Running Jetty is rather simple. You've seen most of the details of the pom.xml when we previously discussed it. Running Jetty through Maven involves running the following command: mvn jetty:run (If you haven't done so already, download the file as a tar ball, and change the pom.xml's version of the two RESTEasy dependencies to 1.2.RC1) That command will launch Jetty, and allow you to browse our project at http://localhost:8080/contacts. Add a few contacts, and view them either as a group at /contacts in HTML, as a group in XML at /contacts/data, or individually as XML by following the links found at /contacts. Congratulations. You now have a running Spring MVC/RESTEasy application. We need one more thing to consider this application complete: a JUnit test. ContractTest.java RESTEasy provides a mechanism for easily launching a Spring MVC/RESTEasy application. RESTEasy also comes with a robust REST client framework. This article will cover bits and pieces of the test, but you can view the entire code in the RESTEasy SVN. To start, we're going to set up an interface that the RESTEasy client can use to create a client for our application. It consists of abstract methods annotated with JAX-RS annotations: @Path(ContactsResource.CONTACTS_URL) public interface ContactProxy { @Path("data") @POST @Consumes(MediaType.APPLICATION_XML) Response createContact(Contact contact); @GET @Produces(MediaType.APPLICATION_XML) Contact getContact(@ClientURI String uri); @GET String getString(@ClientURI String uri); } All methods on ContactProxy inherit the ContactsResource.CONTACTS_URL path ("/contacts") as the root URL, just like a server-side JAX-RS resource. This interface has three methods: Create a contact - the createContact method maps to a POST to "/contacts/data".
The method accepts a Contact object which will be converted to XML before it's sent to the server. The result is a JAX-RS Response object which contains the response status and headers. One of those headers includes the LOCATION of the new contact. Get an XML Contact - Given a URL to a Contact, such as the URL returned by the createContact method's response's LOCATION header, GET an XML response and create a Contact object from it. Get a Response as a String - Given a URL, such as a Contact URL or anything else on the server, retrieve a String result. This interface will be used by RESTEasy to construct a concrete instance that uses the JAX-RS annotations to perform the actual HTTP calls. Next, let's create the embedded server and use RESTEasy to create that instance of a ContactProxy: private static ContactProxy proxy; private static TJWSEmbeddedSpringMVCServer server; public static final String host = "http://localhost:8080/"; @BeforeClass public static void setup() { server = new TJWSEmbeddedSpringMVCServer("classpath:springmvc-servlet.xml", 8080); server.start(); RegisterBuiltin.register(ResteasyProviderFactory.getInstance()); proxy = ProxyFactory.create(ContactProxy.class, host); } @AfterClass public static void end() { server.stop(); } JUnit invokes methods annotated by @BeforeClass before any test methods run. Methods annotated by @AfterClass are triggered by JUnit after all test methods are complete. In our case, the setup method will instantiate a server that contains a SpringMVC Servlet on port 8080 that is configured by the same Spring XML configuration file we used in Jetty. It also invokes the two lines of code required to create a RESTEasy client. RegisterBuiltin sets up the RESTEasy run time, and must be run one time per client. ProxyFactory.create tells RESTEasy to read the annotations on the ContactProxy interface and to create a Java Proxy instance that knows how to perform the HTTP requests we'll need for our test: @Test public void testData() { Response response = proxy.createContact(new Contact("Solomon", "Duskis")); String duskisUri = (String) response.getMetadata().getFirst(HttpHeaderNames.LOCATION); Assert.assertTrue(duskisUri.endsWith(ContactsResource.CONTACTS_URL + "/data/Duskis")); Assert.assertEquals("Solomon", proxy.getContact(duskisUri).getFirstName()); ... } This test creates a new Contact and checks the server's response to make sure that the URL is consistent with the test's expectations. It then re-retrieves the Contact and confirms that the firstName is indeed what was sent in. While this is a pretty trivial looking test, it performs quite a bit of HTTP activity and business logic. Conclusion This article discussed quite a bit of philosophy and design considerations in building a RESTful web application with RESTEasy and Spring MVC. We also built an end to end application with RESTEasy, Spring MVC, Maven, Jetty and JUnit. Even though the content in this article was significant, the code presented here is relatively short compared to other Java alternatives. We touched on subjects like designing REST Applications, creating Spring applications, the RESTEasy client infrastructure and testing RESTful applications. Each of those subjects merits its own article. There were also other subjects that we simply couldn't fit into this article (as long as it is), including adding JavaScript to the toolkit to allow closer integration between the browser and your RESTful application, integrating with Flex, and more.
The code presented in this article can serve as a spring board (again, no pun intended) for all of those ideas. About the Authors Solomon Duskis Solomon Duskis is a Senior Manager at SunGard Consulting Services. He's been developing for 22 years -- 12 years in professional capacity. He has experience in various industries such as Finance, Media, Insurance and Health. He contributes to Open Source projects such as JBoss Resteasy and the Spring framework. He is a published author of Spring Persistence - A Running Start, and the upcoming book Spring Persistence with Hibernate. Richard Burton Richard Burton is the co-founder of a small independent consulting firm called SmartCode LLC. He is an Open Source fanatic with over 10 years of experience in various industries such as Automotive, Insurance, Finance and fondly remembers the .com era. In his spare time, he contributes to Open Source projects such as SiteMesh 3, Struts 2, and more. Reference REST Roy Fielding's REST Thesis - Architectural Styles and the Design of Network-based Software Architectures(December 2000) Bill Burke's (August 2008 - DZone) How to GET a cup of coffee (October 2008 - InfoQ) Roy Fielding REST APIs Must Be Hypertext Driven (October 2008 - Untagled Roy's blog - take a look at the URI: roy.gbiv.com ) JAX-RS Bill Burke's (September 2008 - DZone) An overview of JAX-RS 1.0 Features James Strachan's JAX-RS as the one Java web framework to rule them all? (January 2009 - James' blog) RESTEasy RESTEasy project A blog about Spring + RESTEasy Getting Started with RESTEasy Spring Spring 2.5 reference Oh, just search google for "spring framework" Spring MVC Spring MVC Reference Spring MVC Step By Step Spring MVC Tutorial
October 15, 2009
by Solomon Duskis
· 140,455 Views · 1 Like
article thumbnail
Use Python Win32gui Draw Something And Get Info On Some Window Specialized By Points
import win32gui
from re import match

def draw_line():
    print 'x1,y1,x2,y2?'
    s=raw_input()
    if match('\d+,\d+,\d+,\d+',s):
        x1,y1,x2,y2=s.split(',')
        x1=int(x1)
        y1=int(y1)
        x2=int(x2)
        y2=int(y2)
        hwnd=win32gui.WindowFromPoint((x1,y1))
        hdc=win32gui.GetDC(hwnd)
        x1c,y1c=win32gui.ScreenToClient(hwnd,(x1,y1))
        x2c,y2c=win32gui.ScreenToClient(hwnd,(x2,y2))
        win32gui.MoveToEx(hdc,x1c,y1c)
        win32gui.LineTo(hdc,x2c,y2c)
        win32gui.ReleaseDC(hwnd,hdc)
    main()

def draw_point():
    print 'x,y,color?'
    s=raw_input()
    if match('\d+,\d+,\d+',s):
        x,y,color=s.split(',')
        x=int(x)
        y=int(y)
        color=int(color)
        hwnd=win32gui.WindowFromPoint((x,y))
        hdc=win32gui.GetDC(hwnd)
        x1,y1=win32gui.ScreenToClient(hwnd,(x,y))
        win32gui.SetPixel(hdc,x1,y1,color)
        win32gui.ReleaseDC(hwnd,hdc)
    main()

def get_pixel_col():
    print 'x,y?'
    s=raw_input()
    if match('\d+,\d+',s):
        x,y=s.split(',')
        x=int(x)
        y=int(y)
        hwnd=win32gui.WindowFromPoint((x,y))
        hdc=win32gui.GetDC(hwnd)
        x1,y1=win32gui.ScreenToClient(hwnd,(x,y))
        color=win32gui.GetPixel(hdc,x1,y1)
        win32gui.ReleaseDC(hwnd,hdc)
        print color
    main()

def get_current_pos_info():
    x,y=win32gui.GetCursorPos()
    hwnd=win32gui.WindowFromPoint((x,y))
    hdc=win32gui.GetDC(hwnd)
    x1,y1=win32gui.ScreenToClient(hwnd,(x,y))
    print x,y,win32gui.GetPixel(hdc,x1,y1)
    win32gui.ReleaseDC(hwnd,hdc)
    main()

def main():
    print ('''l. draw line
p. draw point
g. get pixel color
c. get current mouse position's info''')
    s=raw_input()
    if s.lower()=='l':
        draw_line()
    if s.lower()=='p':
        draw_point()
    if s.lower()=='g':
        get_pixel_col()
    if s.lower()=='c':
        get_current_pos_info()

main()
October 15, 2009
by Snippets Manager
· 5,485 Views
article thumbnail
Understanding the NHibernate Type System
This article is taken from NHibernate in Action from Manning Publications. This article delves into the NHibernate type system. For the table of contents, the Author Forum, and other resources, go to http://www.manning.com/kuate/. It is being reproduced here by permission from Manning Publications. Manning ebooks are sold exclusively through Manning. Visit the book's page for more information. Softbound print: February 2009 | 400 pages ISBN: 1932394923 Use code "dzone30" to get 30% off any version of this book. Entities are the coarse-grained classes in a system. You usually define the features of a system in terms of the entities involved: "the user places a bid for an item" is a typical feature definition that mentions three entities - user, bid and item. In contrast, value types are the much more fine grained classes in a system, such as strings, numbers, dates and monetary amounts. These fine grained classes can be used in many places and serve many purposes; the value type string can store email addresses, usernames and many other things. Strings are simple value types, but it is possible (but less common) to create value types that are more complex. For example, a value type could contain several fields, like an Address. So how do we differentiate between value types and entities? From a more formal standpoint, we can say an entity is any class whose instances have their own persistent identity, and a value type is a class whose instances do not. The entity instances may therefore be in any of the three persistent lifecycle states: transient, detached, or persistent. However, we don't consider these lifecycle states to apply to the simpler value type instances. Furthermore, because entities have their own lifecycle, the Save() and Delete() methods of the NHibernate ISession interface will apply to them, but never to value type instances. To illustrate, let's consider Figure A. Figure A – An order entity with TotalAmount value type TotalAmount is an instance of value type Money. Because value types are completely bound to their owning entities, TotalAmount is only saved when the Order is saved. Associations and Value Types As we said, not all value types are simple. It's possible for value types to also define associations. For example, our Money value type could have a property called Currency that is an association to a Currency entity, as shown in Figure B. Figure B – The Money value type with an association to a Currency entity. If your value types have associations, they must always point to entities. The reason is that, if associations could point to value types, a value type could potentially belong to several entities, which isn't desirable. This is one of the great things about value types; if you update a value type instance, you know that it only affects the entity that owns it. For example, changing the TotalAmount of one Order simply cannot accidentally affect others. So far we've talked about value types and entities from an object oriented perspective. To build a more complete picture, we shall now take a look at how the relational model sees value types and entities, and how NHibernate bridges the gap. Bridging from objects to database You may be aware that a database architect would see the world of value types and entities slightly differently from this object oriented view of things. In the database, tables represent the entities, and columns represent the values. Even join tables and lookup tables are entities.
So, if all tables represent entities in the database, does that mean we have to map all tables to entities in our .NET domain model? What about those value types we wanted in our model? NHibernate provides constructs for dealing with this. For example, a many-to-many association mapping hides the intermediate association table from the application, so we don't end up with an unwanted entity in our domain model. Similarly, a collection of value typed strings behaves like a value type from the point of view of the .NET domain model even though it's mapped to its own table in the database. These features have their uses and can often simplify your C# code. However, over time we have become suspicious of them; these "hidden" entities often end up needing exposure in our applications as business requirements evolve. The many-to-many association table, for example, often has additional columns added as the application matures, so the relationship itself becomes an entity. You might not go far wrong if you expose every database-level entity to the application as an entity class. For example, we'd be inclined to model the many-to-many association as two one-to-many associations to an intervening entity class. We'll leave the final decision to you, and return to the topic of many-to-many entity associations later in this chapter. Mapping types So far we've discussed the differences between value types and entities, as seen from the object oriented and relational database perspectives. We know that mapping entities is quite straightforward – entity classes are simply always mapped to database tables using <class>, <subclass>, and <joined-subclass> mapping elements. Value types need something more, which is where mapping types enter the picture. Consider this mapping of the CaveatEmptor User and email address: In ORM, you have to worry about both .NET types and SQL data types. In the example above imagine that the Email field is a .NET string, and the EMAIL column is an SQL VARCHAR. We need to tell NHibernate how to carry out this conversion, which is where NHibernate mapping types come in. In this case, we've specified the mapping type "String", which we know is appropriate for this particular conversion. The String mapping type isn't the only one built into NHibernate; NHibernate comes with various mapping types that define default persistence strategies for primitive .NET types and certain classes, such as DateTime. Built-in mapping types NHibernate's built-in mapping types usually reflect the name of the .NET type they map. Sometimes you'll have a choice of mapping types available to map a particular .NET type to the database. However, the built-in mapping types aren't designed to perform arbitrary conversions, such as mapping a VARCHAR field value to a .NET Int32 property value. If you want this kind of functionality, you will have to define your own custom value types. We will get to that topic a little later in this chapter. We'll now discuss the basic types: date and time, objects, large objects, and various other built-in mapping types, and show you which .NET and System.Data.DbType data types they handle. DbTypes are used to infer the data provider types (hence SQL data types). .NET primitive mapping types The basic mapping types in table A map .NET primitive types to appropriate DbTypes.
Table A Primitive types Mapping Type .NET Type System.Data.DbType Int16 System.Int16 DbType.Int16 Int32 System.Int32 DbType.Int32 Int64 System.Int64 DbType.Int64 Single System.Single DbType.Single Double System.Double DbType.Double Decimal System.Decimal DbType.Decimal Byte System.Byte DbType.Byte Char System.Char DbType.StringFixedLength - 1 character AnsiChar System.Char DbType.AnsiStringFixedLength - 1 character Boolean System.Boolean DbType.Boolean Guid System.Guid DbType.Guid PersistentEnum System.Enum (an enumeration) The DbType for the underlying value TrueFalse System.Boolean DbType.AnsiStringFixedLength - either 'T' or 'F' YesNo System.Boolean DbType.AnsiStringFixedLength - either 'Y' or 'N' You’ve probably noticed that your database doesn’t support some of the DbTypes listed in table A. However, ADO.NET provides a partial abstraction of vendor-specific SQL data types, allowing NHibernate to work with ANSI-standard types when executing data manipulation language (DML). For database-specific DDL generation, NHibernate translates from the ANSI-standard type to an appropriate vendor-specific type, using the built-in support for specific SQL dialects. (You usually don’t have to worry about SQL data types if you’re using NHibernate for data access and data schema definition.) NHibernate supports a number of mapping types coming from Hibernate for compatibility (useful for those coming over from Hibernate or using Hibernate tools to generate hbm.xml files). Table B lists the additional names of NHibernate mapping types. Table B Additional names of NHibernate mapping types Mapping type Additional name Binary binary Boolean boolean Byte byte Character character CultureInfo locale DateTime datetime Decimal big_decimal Double double Guid guid Int16 short Int32 int Int32 integer Int64 long Single float String string TrueFalse true_false Type class YesNo yes_no From this table, you can see that writing type="integer" or type="int" is identical to type="Int32". Note that this table contains many mapping types that will be discussed in the following sections. Date/time mapping types Table C lists NHibernate types associated with dates, times, and timestamps. In your domain model, you may choose to represent date and time data using either System.DateTime or System.TimeSpan. As they have different purposes, the choice should be easy. Table C Date and time typesExcerptOpenSourceSOAch5-6.doc Mapping Type .NET Type System.Data.DbType DateTime System.DateTime DbType.DateTime - ignores the milliseconds Ticks System.DateTime DbType.Int64 TimeSpan System.TimeSpan DbType.Int64 Timestamp System.DateTime DbType.DateTime - as specific as database supports Object mapping types All .NET types in tables A and C are value types (i.e. derived from System.ValueType). This means that they can’t be null; unless you use the .NET 2.0 Nullable structure or the Nullables add-in, as discussed in the next section. Table D lists NHibernate types for handling .NET types derived from System.Object (which can store null values). Table D Nullable object typesExcerptOpenSourceSOAch5-6.doc Mapping Type .NET Type System.Data.DbType String System.String DbType.String AnsiString System.String DbType.AnsiString This table is completed by tables E and F which also contain nullable mapping types. Large object mapping types Table E lists NHibernate types for handling binary data and large objects. Note that none of these types may be used as the type of an identifier property. 
Table E Binary and large object types

Mapping Type | .NET Type | System.Data.DbType
Binary | System.Byte[] | DbType.Binary
BinaryBlob | System.Byte[] | DbType.Binary
StringClob | System.String | DbType.String
Serializable | Any System.Object marked with SerializableAttribute | DbType.Binary

BinaryBlob and StringClob are mainly supported by SQL Server. They can have a very large size and are fully loaded into memory, which can be a performance killer if they are used to store very large objects, so use this feature carefully. Note that you must set the NHibernate property "prepare_sql" to "true" to enable this feature. You can find up-to-date design patterns and tips for large object usage on the NHibernate website.

Various CLR mapping types

Table F lists NHibernate types for various other CLR types that may be represented as DbType.String in the database.

Table F Other CLR-related types

Mapping Type | .NET Type | System.Data.DbType
CultureInfo | System.Globalization.CultureInfo | DbType.String - 5 chars for culture
Type | System.Type | DbType.String holding the Assembly Qualified Name

Certainly, <property> isn’t the only NHibernate mapping element that has a type attribute.

Using mapping types

All of the basic mapping types may appear almost anywhere in the NHibernate mapping document, on normal property, identifier property, and other mapping elements. The <id>, <property>, <discriminator>, <index>, and <element> elements, among others, all define an attribute named type. (There are certain limitations on which basic mapping types may function as an identifier or discriminator type, however.) You can see how useful the built-in mapping types are in a mapping for the BillingDetails class: the BillingDetails class is mapped as an entity, its discriminator, id, and Number properties are value typed, and we use the built-in NHibernate mapping types to specify the conversion strategy. It’s often not necessary to explicitly specify a built-in mapping type in the XML mapping document. For instance, if you have a property of .NET type System.String, NHibernate will discover this using reflection and select String by default, which lets us simplify such a mapping considerably. For each of the built-in mapping types, a constant is defined by the class NHibernateUtil. For example, NHibernateUtil.String represents the String mapping type. These constants are useful for query parameter binding, as discussed in more detail in chapter 8:

session.CreateQuery("from Item i where i.Description like :desc")
    .SetParameter("desc", desc, NHibernateUtil.String)
    .List();

These constants are also useful for programmatic manipulation of the NHibernate mapping metamodel, as discussed in chapter 3. Of course, NHibernate isn’t limited to the built-in mapping types; you can create your own custom mapping types for handling certain scenarios. We’ll take a look at this next, and explain how the mapping type system is central to NHibernate’s flexibility.

Creating custom mapping types

Object-oriented languages like C# make it easy to define new types by writing new classes. Indeed, this is a fundamental part of the definition of object orientation. If you were limited to the predefined built-in NHibernate mapping types when declaring properties of persistent classes, you’d lose much of C#’s expressiveness. Furthermore, your domain model implementation would be tightly coupled to the physical data model, since new type conversions would be impossible. In order to avoid that, NHibernate provides a very powerful feature called custom mapping types.
NHibernate provides two user-friendly interfaces that applications may use when defining new mapping types, the first being NHibernate.UserTypes.IUserType. IUserType is suitable for most simple cases and even for some more complex problems. Let’s use it in a simple scenario. Our Bid class defines an Amount property and our Item class defines an InitialPrice property, both monetary values. So far, we’ve only used a simple System.Double to represent the value, mapped with Double to a single DbType.Double column. Suppose we wanted to support multiple currencies in our auction application and that we had to refactor the existing domain model for this change. One way to implement this change would be to add new properties to Bid and Item: AmountCurrency and InitialPriceCurrency. We would then map these new properties to additional VARCHAR columns with the built-in String mapping type. Imagine if we had currency stored in 100 places, this would be lots of changes. We hope you never use this approach! Creating an implementation of IUserType Instead, we should create a MonetaryAmount class that encapsulates both currency and amount. This is a class of the domain model and doesn’t have any dependency on NHibernate interfaces: [Serializable] public class MonetaryAmount { private readonly double value; private readonly string currency; public MonetaryAmount(double value, string currency) { this.value = value; this.currency = currency; } public double Value { get { return value; } } public string Currency { get { return currency; } } public override bool Equals(object obj) { ... } public override int GetHashCode() { ... } } We’ve also made life simpler by making MonetaryAmount an immutable class, meaning it can’t be changed after it’s instantiated. We would have to implement Equals() and GetHashCode() to complete the class - but there is nothing special to consider here aside that they must be consistent, and GetHashCode() should return mostly unique numbers. We will use this new MonetaryAmount to replace the Double, as defined on the InitialPrice property for Item. We would benefit by using this new class in other places, such as the Bid.Amount. The next challenge is in mapping our new MonetaryAmount properties to the database. Suppose we’re working with a legacy database that contains all monetary amounts in USD. Our new class means our application code is no longer restricted to a single currency, but it will take time to get the changes done by the database team. Until this happens, we’d like to store just the Amount property of MonetaryAmount to the database. Because we can’t store the currency yet, we’ll convert all Amounts to USD before we save them, and from USD when we load them. The first step to handling this is to tell NHibernate how to handle our Monetarymount type. To do this, we create a MonetaryAmountUserType class that implements the NHibernate interface IUserType. Our custom mapping type is shown in listing A. 
Listing A Custom mapping type for monetary amounts in USD using System; using System.Data; using NHibernate.UserTypes; public class MonetaryAmountUserType : IUserType { private static readonly NHibernate.SqlTypes.SqlType[] SQL_TYPES = { NHibernateUtil.Double.SqlType }; public NHibernate.SqlTypes.SqlType[] SqlTypes { |1 get { return SQL_TYPES; } } public Type ReturnedType { get { return typeof(MonetaryAmount); } } |2 public new bool Equals( object x, object y ) { |3 if ( object.ReferenceEquals(x,y) ) return true; if (x == null || y == null) return false; return x.Equals(y); } public object DeepCopy(object value) { return value; } |4 public bool IsMutable { get { return false; } } |5 public object NullSafeGet(IDataReader dr, string[] names, object owner){ |6 object obj = NHibernateUtil.Double.NullSafeGet(dr, names[0]); if ( obj==null ) return null; double valueInUSD = (double) obj; return new MonetaryAmount(valueInUSD, "USD"); } public void NullSafeSet(IDbCommand cmd, object obj, int index) { |7 if (obj == null) { ((IDataParameter)cmd.Parameters[index]).Value = DBNull.Value; } else { MonetaryAmount anyCurrency = (MonetaryAmount)obj; MonetaryAmount amountInUSD = MonetaryAmount.Convert( anyCurrency, "USD" ); ((IDataParameter)cmd.Parameters[index]).Value = amountInUSD.Value; } } public static MonetaryAmount Convert( MonetaryAmount m, string targetCurrency) { ... |8 } } The SqlTypes property tells NHibernate what SQL column types to use for DDL schema generation, as seen in #1. The types are subclasses of NHibernate.SqlTypes.SqlType. Notice that this property returns an array of types. An implementation of IUserType may map a single property to multiple columns, but our legacy data model only has a single Double. In #2, we can see that ReturnedType tells NHibernate what .NET type is mapped by this IUserType. The IUserType is responsible for dirty-checking property values (#3). The Equals() method compares the current property value to a previous snapshot and determines whether the property is dirty and must by saved to the database. The IUserType is also partially responsible for creating the snapshot in the first place, as shown in #4. Since MonetaryAmount is an immutable class, the DeepCopy() method returns its argument. In the case of a mutable type, it would need to return a copy of the argument to be used as the snapshot value. This method is also called when an instance of the type is written to or read from the second level cache. NHibernate can make some minor performance optimizations for immutable types. The IsMutable (#5) property tells NHibernate that this type is immutable. The NullSafeGet() method shown near #6 retrieves the property value from the ADO.NET IDataReader. You can also access the owner of the component if you need it for the conversion. All database values are in USD, so you have to convert the MonetaryAmount returned by this method before you show it to the user. In #7, the NullSafeSet() method writes the property value to the ADO.NET IDbCommand. This method takes whatever currency is set and converts it to a simple Double USD value before saving. Note that, for briefness, we haven’t provided a Convert function as shown in #8. If we were to implement it, it would have code that converts between various currencies. Mapping the InitialPrice property of Item can be done as follows: This is the simplest kind of transformation that an implementation of IUserType could perform. It takes a Value Type class and maps it to a single database column. 
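The InitialPrice mapping referred to just above might look roughly like the following sketch; the column name, namespace, and assembly name are assumptions rather than the book’s actual listing:

<!-- Sketch only: the type attribute holds the assembly-qualified name of the IUserType implementation. -->
<property name="InitialPrice"
          column="INITIAL_PRICE"
          type="MyCompany.Auction.MonetaryAmountUserType, MyCompany.Auction" />

With this in place, NHibernate instantiates the user type and routes loading and saving of that single column through its NullSafeGet() and NullSafeSet() methods.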
Much more sophisticated things are possible; a custom mapping type could perform validation, it could read and write data to and from an Active Directory, or it could even retrieve persistent objects from a different NHibernate ISession for a different database. You’re limited mainly by your imagination and performance concerns! In a perfect world, we’d prefer to represent both the amount and currency of our monetary amounts in the database, so we’re not limited to storing just USD. We could still use an IUserType for this, but it’s limited; If an IUserType is mapped with more than one property, we can’t use them our HQL or Criteria queries. The NHibernate query engine wouldn’t know anything about the individual properties of MonetaryAmount. You still access the properties in your C# code (MonetaryAmount is just a regular class of the domain model, after all), but not in NHibernate queries. To allow for a custom value type with multiple properties that can be accessed in queries, we should use the ICompositeUserType interface. This interface exposes the properties of our MonetaryAmount to NHibernate. Creating an implementation of ICompositeUserType To demonstrate the flexibility of custom mapping types, we won’t have to change our MonetaryAmount domain model class at all—we change only the custom mapping type, as shown in listing B. Listing B Custom mapping type for monetary amounts in new database schemas using System; using System.Data; using NHibernate.UserTypes; public class MonetaryAmountCompositeUserType : ICompositeUserType { public Type ReturnedClass { get { return typeof(MonetaryAmount); } } public new bool Equals( object x, object y ) { if ( object.ReferenceEquals(x,y) ) return true; if (x == null || y == null) return false; return x.Equals(y); } public object DeepCopy(object value) { return value; } public bool IsMutable { get { return false; } } public object NullSafeGet(IDataReader dr, string[] names, NHibernate.Engine.ISessionImplementor session, object owner) { object obj0 = NHibernateUtil.Double.NullSafeGet(dr, names[0]); object obj1 = NHibernateUtil.String.NullSafeGet(dr, names[1]); if ( obj0==null || obj1==null ) return null; double value = (double) obj0; string currency = (string) obj1; return new MonetaryAmount(value, currency); } public void NullSafeSet(IDbCommand cmd, object obj, int index, NHibernate.Engine.ISessionImplementor session) { if (obj == null) { ((IDataParameter)cmd.Parameters[index]).Value = DBNull.Value; ((IDataParameter)cmd.Parameters[index+1]).Value = DBNull.Value; } else { MonetaryAmount amount = (MonetaryAmount)obj; ((IDataParameter)cmd.Parameters[index]).Value = amount.Value; ((IDataParameter)cmd.Parameters[index+1]).Value = amount.Currency; } } public string[] PropertyNames { |1 get { return new string[] { "Value", "Currency" }; } } public NHibernate.Type.IType[] PropertyTypes { |2 get { return new NHibernate.Type.IType[] { NHibernateUtil.Double, NHibernateUtil.String }; } } public object GetPropertyValue(object component, int property) { |3 MonetaryAmount amount = (MonetaryAmount) component; if (property == 0) return amount.Value; else return amount.Currency; } public void SetPropertyValue(object comp, int property, object value) { |4 throw new Exception("Immutable!"); } public object Assemble(object cached, |5 NHibernate.Engine.ISessionImplementor session, object owner) { return cached; } public object Disassemble(object value, |6 NHibernate.Engine.ISessionImplementor session) { return value; } } #1 shows how an implementation of 
ICompositeUserType has its own properties, defined by PropertyNames. Similarly, the properties each have their own type, as defined by PropertyTypes (#2). The GetPropertyValue() method, shown in #3, returns the value of an individual property of the MonetaryAmount. Since MonetaryAmount is immutable, we can’t set property values individually (see #4) This isn’t a problem because this method is optional anyway. In #5, the Assemble() method is called when an instance of the type is read from the second-level cache. The Disassemble() method is called when an instance of the type is written to the second-level cache, as shown in #6. The order of properties must be the same in the PropertyNames, PropertyTypes, and GetPropertyValues() methods. The InitialPrice property now maps to two columns, so we declare both in the mapping file. The first column stores the value; the second stores the currency of the MonetaryAmount. Note that the order of columns must match the order of properties in your type implementation: In a query, we can now refer to the Amount and Currency properties of the custom type, even though they don’t appear anywhere in the mapping document as individual properties: from Item i where i.InitialPrice.Value > 100.0 and i.InitialPrice.Currency = 'XAF' In this example we’ve expanded the buffer between the .NET object model and the SQL database schema with our custom composite type. Both representations can now handle changes more robustly. If implementing custom types seems complex, relax; you rarely need to use a custom mapping type. An alternative way to represent the MonetaryAmount class is to use a component mapping, as in section 4.4.2, “Using components.” The decision to use a custom mapping type is often a matter of taste. There are few more interfaces that can be used to implement custom types; they are introduced in the next section. Other interfaces to create custom mapping types You may find that the interfaces IUserType and ICompositeUserType do not allow you to easily add more features to your custom types. In this case, you will need to use some of the other interfaces which are in the NHibernate.UserTypes namespace: The IParameterizedType interface allows you to supply parameters to your custom type in the mapping file. This interface has a unique method: SetParameterValues(IDictionary parameters) that will be called at the initialization of your type. Here is an example of mapping providing a parameter: Euro This mapping tells the custom type to use Euro as currency if it isn’t specified. The IEnhancedUserType interface makes it possible to implement a custom type that can be marshalled to and from its string representation. This functionality allows this type to be used as identifier or discriminator type. To create a type that can be used as version, you must implement the IUserVersionType interface. The INullableUserType interface allows you to interpret non-null values in a property as null in the database. When using dynamic-insert or dynamic-update, fields identified as null will not be inserted or updated. This information may also be used when generating the where clause of the SQL command when optimistic locking is enabled. The last interface is different from the previous because it is meant to implement user defined collection types: IUserCollectionType. For more details, take a look at the implementation NHibernate.Test.UserCollection.MyListType in the source code of NHibernate. 
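The two mapping fragments discussed in this section might look like the following sketches: first the two-column mapping used with the ICompositeUserType, then a typedef that feeds a default currency of Euro to a parameterized custom type. The column names, namespaces, assembly names, and the DefaultCurrency parameter name are assumptions.

<!-- ICompositeUserType: InitialPrice spans two columns, declared in the same
     order as PropertyNames/PropertyTypes (value first, then currency). -->
<property name="InitialPrice"
          type="MyCompany.Auction.MonetaryAmountCompositeUserType, MyCompany.Auction">
  <column name="INITIAL_PRICE" />
  <column name="INITIAL_PRICE_CURRENCY" />
</property>

<!-- IParameterizedType: a typedef passes a parameter to the custom type,
     telling it to fall back to Euro when no currency is specified. -->
<typedef name="money"
         class="MyCompany.Auction.MonetaryAmountUserType, MyCompany.Auction">
  <param name="DefaultCurrency">Euro</param>
</typedef>
<property name="InitialPrice" column="INITIAL_PRICE" type="money" />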
Now, let’s look at an extremely important application of custom mapping types. Nullable types are found in almost all enterprise applications.

Using Nullable types

With .NET 1.1, primitive types cannot be null, but this is no longer the case in .NET 2.0. Let’s say that we want to add a DismissDate to the class User. As long as a user is active, its DismissDate should be null. But the System.DateTime struct cannot be null, and we don’t want to use some "magic" value to represent the null state. With .NET 2.0 (and 3.5, of course), you can simply write:

public class User {
    ...
    private DateTime? dismissDate;

    public DateTime? DismissDate {
        get { return dismissDate; }
        set { dismissDate = value; }
    }
    ...
}

We omit other properties and methods because we focus on the nullable property. And no change is required in the mapping. If you work with .NET 1.1, the Nullables add-in (in the NHibernateContrib package for versions prior to NHibernate 1.2.0) contains a number of custom mapping types which allow primitive types to be null. For our previous case, we can use the Nullables.NullableDateTime class:

using Nullables;

[Class]
public class User {
    ...
    private NullableDateTime dismissDate;

    [Property]
    public NullableDateTime DismissDate {
        get { return dismissDate; }
        set { dismissDate = value; }
    }
    ...
}

The mapping is quite straightforward: ... It is important to note that, in the mapping, the type of DismissDate must be Nullables.NHibernate.NullableDateTimeType (from the file Nullables.NHibernate.dll). This type is a wrapper used to translate Nullables types from/to database types. When using the NHibernate.Mapping.Attributes library, however, this is handled automatically, which is why we only had to add the [Property] attribute. The NullableDateTime type behaves exactly like System.DateTime; there are even implicit operators to make it easy to interact with. The Nullables library contains nullable types for most .NET primitive types supported by NHibernate. You can find more details in the NHibernate documentation.

Using enumerated types

An enumeration (enum) is a special form of value type, which inherits from System.Enum and supplies alternate names for the values of an underlying primitive type. For example, the Comment class defines a Rating. If you recall, in our CaveatEmptor application, users are able to leave comments about other users. Instead of using a simple int property for the rating, we create an enumeration:

public enum Rating { Excellent, Ok, Low }

We then use this type for the Rating property of our Comment class. In the database, ratings are represented as the type of the underlying value; in this case (and by default), that is Int32. And that’s all we have to do. We may specify type="Rating" in our mapping, but it is optional; NHibernate can use reflection to find this. One problem you might run into is using enumerations in NHibernate queries. Consider the following query in HQL that retrieves all comments rated “Low”:

IQuery q = session.CreateQuery("from Comment c where c.Rating = Rating.Low");

This query doesn’t work, because NHibernate doesn’t know what to do with Rating.Low and will try to use it as a literal.
We have to use a bind parameter and set the rating value for the comparison dynamically (which is usually what we need anyway, for other reasons):

IQuery q = session.CreateQuery("from Comment c where c.Rating = :rating");
q.SetParameter("rating", Rating.Low, NHibernateUtil.Enum(typeof(Rating)));

The last line in this example uses the static helper method NHibernateUtil.Enum() to define the NHibernate type, a simple way to tell NHibernate about our enumeration mapping and how to deal with the Rating.Low value. We’ve now discussed all kinds of NHibernate mapping types: built-in mapping types, user-defined custom types, and even components. They’re all considered value types, because they map objects of value type (not entities) to the database. With a good understanding of what value types are, and how they are mapped, you can now move on to the more complex issue of collections of value typed instances.
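To close out the enum example, the Rating property mapping that the text describes as optional might look like this sketch; the column name is an assumption, and depending on your hbm.xml defaults the explicit type name may need to be namespace- and assembly-qualified:

<!-- Usually sufficient: NHibernate infers the Rating enum by reflection and stores its underlying Int32 value. -->
<property name="Rating" column="RATING" />

<!-- Equivalent, with the enum type spelled out explicitly (for example "MyCompany.Auction.Rating, MyCompany.Auction"). -->
<property name="Rating" column="RATING" type="Rating" />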
October 8, 2009
by Alvin Ashcraft
· 68,472 Views · 1 Like
article thumbnail
Creating a Custom JSF 1.2 Component - With Facets, Resource Handling, Events and Listeners
I occasionally create custom JavaServer Faces components. Just enough to sort of remember what the steps are, but not nearly frequently enough to quickly put a new component together. This article demonstrates the quick step approach to creating a new custom component in the old fashioned way (that means: it is not a Facelets template based or an ADF Faces 11g Declarative Component). Its primary purpose is to help me quickly retrace my steps. But perhaps it will benefit some of you as well. The Shuffler component I will develop supports facets. It will render its facet children - one after the other. Which one is rendered first can be indicated through an attribute facetOrder (values normal, reverse and random), which is EL enabled. A shuffler-method-expression can optionally be set to provide the Shuffler with a shuffle-order-processor: the method is invoked with the list of facets to shuffle and will return it in the order in which to render the children. The component can render with a shuffle icon that when pressed causes the children to be shuffled. The Shuffler component allows registration of Shuffle Event Listeners, custom listeners that are informed whenever the shuffle event occurs. An example of how the Shuffler can be used inside a JSF page: Some elements of custom JSF components that are explicitly discussed in this article: dynamic attributes of type ValueExpression (EL enabled) attributes of type MethodExpression (also EL enabled) facets (custom) events and listeners Bare essentials for custom JSF components A custom JSF component is represented by a Java Class - one that extends from UIComponentBase. An instance of this class is created whenever a new page is rendered that contains the component (and for each occurrence of the component in the page, a new instance of the class is created). The component class holds the attributes that are set by the page developer and that determine the behavior and appearance of the component. The component class has the internal logic of the component and it deals for example with events and listeners. This class may also render the markup (HTML) for the component - though it is a better practice to leave the actual rendering to a Renderer class. public class Shuffler extends UIComponentBase { ... } Most custom JSF component will also have an associated Renderer class, that extends from Renderer. Note that some components will not actually be rendered (such as Listeners, Iterators or Parameters) and therefore will not have a Renderer class. The Renderer is not only responsible for rendering the HTML, it will also inspect (decode) the incoming request from the browser to see whether the request parameter map contains values that are of interest to the component - that indicate for example that a value has been entered or set on the component(’s representation in the browser) or an action has been executed against it. Note that one JSF component may have multiple Renderers, for example for different channels and protocols (to render a representation of the component in plain XML, in WML, in JavaFX or XUL) or for different user agents (Firefox, Internet Explorer) or themes (professional user, internet surfer). public class ShufflerRenderer extends SuperRenderer { ... } JavaServer Faces pages can be created in various ways - including programmatically, using Facelets and using JSP pages. The latter option, through JSP, is still the most common one, though that is about to change with JSF 2.0 favoring Facelets. 
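Picking up the page example promised earlier, the simplest usage of the component might look like the following sketch; the shuf: prefix, the taglib uri, and the id and styleClass values are assumptions, and facets plus the facetOrder attribute are added later in the article:

<%-- Sketch only: the shuf taglib uri must match the one declared in the component library's TLD. --%>
<%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
<%@ taglib uri="/nl.amis.jsf/ShufflerLib" prefix="shuf" %>

<h:form>
  <shuf:shuffler id="shuffler1" styleClass="shuffleBox">
    ... other content ...
  </shuf:shuffler>
</h:form>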
Page developers using JSPs will describe the JSF component tree that will need to be instantiated in memory for rendering a certain View using a plain JSP page. The tags in the JSP page are normal JSP tags - described by TLD (tag library descriptors) - corresponding to JSF components and therefore JSF component classes. Every JSF component that needs to be used in JSPs has to have a corresponding JSP tag-class, one that will typically extend from UIComponentELTag (or just from TagSupport when no JSF component is added to the component tree for a certain tag, for example when that tag represents a listener or parameter). The Tag Class specifies which JSF Component it represents. It also indicates which Renderer should be used to render the component. This means that one component can have multiple JSP tags associated with it, each providing a different way of rendering the component. Note: the renderer can also be specified dynamically - taking user preferences or characteristics into account public class ShufflerTag extends UIComponentELTag { public static final String COMPONENT_TYPE = "nl.amis.jsf.UIShuffler"; public static final String RENDERER_TYPE = "nl.amis.jsf.ShufflerRenderer"; public String getComponentType() { return COMPONENT_TYPE; } public String getRendererType() { return RENDERER_TYPE; } ... } Tags representing JSF components need to be described in TLD files (Tag Library Descriptors) just like any other JSP tag.The entry in the TLD defines the tag label to use in the page, whether the tag can contain child-tags, some descriptive meta data and every attribute that can be configured in the tag. For each attribute the TLD-entry specifies the type, whether it is required and if the attribute can contain an EL expression passing in a value or an EL expression passing in a method; in the latter case, the entry also prescribes the signature of the method: ShufflerLib 1.0 ShufflerLib /nl.amis,jsf/ShufflerLib Writes a DIV element that contains the facets in a specific order. shuffler nl.amis.jsf.shuffler.ShufflerTag JSP id false true rendered false boolean binding false javax.faces.component.UIComponent styleClass false java.lang.String .... JSF components need to be registered in a special faces-config.xml file (special in the sense that it is not the faces-config.xml that drives a web application but rather one that acts like a repository of components and their renderers. Note however that all entries in this special faces-config.xml is merged together with the ‘normal’ faces-config.xml. That means in turn that while the special file is primarily seen as the registry of components, it can also configure PhaseListeners, Navigation Rules (hard to see the value in that) and Managed Beans (which can be very useful). The component registration in faces-config.xml consists of a component type that is associated with a the component class. nl.amis.jsf.UIShuffler nl.amis.jsf.shuffler.Shuffler Renderers can also be registered in this file. A renderer entry registers a renderer-type (corresponding to the value returned by the getRendererType() method in the tag class) associated with the RendererClass. 
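Put together, the component and renderer registrations described in this section might look like the following sketch in the library's faces-config.xml; the element structure is the standard JSF 1.2 one, and the type and class names are the ones used in this article:

<faces-config>
  <!-- component type -> component class -->
  <component>
    <component-type>nl.amis.jsf.UIShuffler</component-type>
    <component-class>nl.amis.jsf.shuffler.Shuffler</component-class>
  </component>
  <!-- renderer type (per component family) -> renderer class -->
  <render-kit>
    <renderer>
      <component-family>nl.amis.Shuffler</component-family>
      <renderer-type>nl.amis.jsf.ShufflerRenderer</renderer-type>
      <renderer-class>nl.amis.jsf.shuffler.ShufflerRenderer</renderer-class>
    </renderer>
  </render-kit>
</faces-config>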
Based on the value (rendererType) returned by the tag class, the correct class to instantiate can be determined from this entry: nl.amis.Shuffler nl.amis.jsf.ShufflerRenderer nl.amis.jsf.shuffler.ShufflerRenderer Implementing the Classes: Component, Renderer and TagHandler The TagHandler ShufflerTag is the intermediary between the world of JSP pages (and the Servlet/JSP engine that translates the JSP file into a servlet class) and the JSF realm. Every tag in the JSP page needs to be turned into its corresponding JSF representation. The tag handler needs to override the setProperties() method inherited from the UIComponentELTag class; this method takes all the values set on the tag attributes in the page and passes them onwards to the Component. In our initial case, the tag is used in JSPs like this: ... other content The styleClass attribute is the only one we defined - id and rendered are defined on every JSP-tag based on JSF’s UIComponentELTag. Thye styleClass attribute is also the only attribute we need to take responsibility for in the tag class, by providing a setter method that sets a private member and by passing the value of that private member to the component in the setProperties() method. The code for the ShufflerTag class now becomes: package nl.amis.jsf.shuffler; import javax.el.ValueExpression; import javax.faces.component.UIComponent; import javax.faces.webapp.UIComponentELTag; public class ShufflerTag extends UIComponentELTag { public static final String COMPONENT_TYPE = "nl.amis.jsf.UIShuffler"; public static final String RENDERER_TYPE = "nl.amis.jsf.ShufflerRenderer"; private ValueExpression styleClass; public String getComponentType() { return COMPONENT_TYPE; } public String getRendererType() { return RENDERER_TYPE; } protected void setProperties(UIComponent component) { super.setProperties(component); processProperty(component, styleClass, Shuffler.STYLECLASS_ATTRIBUTE_KEY); } public void release() { super.release(); styleClass= null; } protected final void processProperty(final UIComponent component, final ValueExpression property, final String propertyName) { if (property != null) { if(property.isLiteralText()) { component.getAttributes().put(propertyName, property.getExpressionString()); } else { component.setValueExpression(propertyName, property); } } } public void setStyleClass(ValueExpression styleClass) { this.styleClass = styleClass; } } We cater for the fact that styleClass can contain a ValueExpression - as all attributes can, starting from JSF 1.2. In the method processProperty we check whether the string passed for styleClass is a literal string or should be considered an EL expression. In the latter case, we pass a ValueExpression to the component, otherwise a ‘normal’ attribute. Also note that the super class takes care of the attributes id, rendered and binding. However, we do have to specify them in the tag-library. The component class in our case leads a pretty comfortable life: the tag handler informs him of all the attribute values and the actual rendering is left to a special Renderer class. 
The component is a pretty passive element in this simple example: package nl.amis.jsf.shuffler; import javax.faces.component.UIComponentBase; import javax.faces.context.FacesContext; public class Shuffler extends UIComponentBase { public static final String FAMILY = "nl.amis.Shuffler"; public static final String STYLECLASS_ATTRIBUTE_KEY = "styleClass"; public String getFamily() { return FAMILY; } @Override public Object saveState(FacesContext facesContext) { Object values[] = new Object[2]; values[0] = super.saveState(facesContext); values[1] = this.getAttributes().get(STYLECLASS_ATTRIBUTE_KEY); return values; } @Override public void restoreState(FacesContext facesContext, Object state) { Object values[] = (Object[])state; super.restoreState(facesContext, values[0]); this.getAttributes().put(STYLECLASS_ATTRIBUTE_KEY, values[1]); } } The only really useful thing the component does is implementing the saveState and restoreState methods. These methods play an important part in turning the state of the component into a serializable object array and restoring that state of the component in the RestoreView phase, based on the serialized array. The Tag Handler specifies in its getRendererType() method that the renderer to use for this component when using the shuffler tag, is one called nl.amis.jsf.ShufflerRenderer. In the faces-config.xml file, we have indicated that this renderer type is associated with the class nl.amis.jsf.shuffler.ShufflerRenderer that extends Renderer. The renderers in JSF can override methods like encodeBegin(), encodeEnd(), encodeChildren() and decode() - the latter only when we have to process the incoming request, looking for new values set on or events that occurred on the component. In our case, we initially will simply have the ShufflerRenderer render a DIV element with a class attribute (based on the styleClass attribute). The DIV will allow the children of the Shuffler component to render - by not overriding the encodeChildren() method. package nl.amis.jsf.shuffler; import javax.faces.context.FacesContext; import javax.faces.context.ResponseWriter; import javax.faces.render.Renderer; public class ShufflerRenderer extends Renderer { @Override public void encodeBegin(final FacesContext facesContext, final UIComponent component) throws IOException { super.encodeBegin(facesContext, component); final ResponseWriter writer = facesContext.getResponseWriter(); writer.startElement("DIV", component); String styleClass = (String)attributes.get(Shuffler.STYLECLASS_ATTRIBUTE_KEY); writer.writeAttribute("class", styleClass, null); } @Override public void encodeEnd(final FacesContext facesContext, final UIComponent component) throws IOException { final ResponseWriter writer = facesContext.getResponseWriter(); writer.endElement("DIV"); } } Next steps - working with facets The Shuffler component is created to dynamically (re)order its child contents. It will do so using facets. The content you want this component to shuffle is passed in two or more facets. The facets are named using string representations of integers, so for example: ... content ... content ... content Facets are automatically supported on JSF components. The getFacets() method is available inside the Shuffler component class and will return a collection of facet UIComponents. Facets are special children for a JSF component: the framework will never render the contents of facets on its own. It is up to the component to determine when and how to render the contents of its facets. 
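A sketch of the facet-based usage just described, with facets named by string representations of integers; the shuf: prefix and the placeholder content are assumptions, and f: is the standard JSF core taglib:

<shuf:shuffler id="shuffler1" styleClass="shuffleBox">
  <f:facet name="1"> ... first block of content ... </f:facet>
  <f:facet name="2"> ... second block of content ... </f:facet>
  <f:facet name="3"> ... third block of content ... </f:facet>
</shuf:shuffler>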
So, there is some work to do for the ShuffleRenderer class. But first we need to add support for the new facetOrder attribute. Adding an attribute means: adding an attribute entry in the TLD adding support for processing the attribute in the Tag Handler (a setter and a line of code in setProperties()) adding the attribute in saveState() and restoreState() in the Component class Here we go: In the tld entry, add: facetOrder false java.lang.String In the tag-handler class ShufflerTag add: private ValueExpression facetOrder; public void setFacetOrder(ValueExpression facetOrder) { this.facetOrder = facetOrder; } and in setProperties(): processProperty(component, facetOrder, Shuffler.FACETORDER_ATTRIBUTE_KEY); Finally in the component class Shuffler , add: public static final String FACETORDER_ATTRIBUTE_KEY = "facetOrder"; @Override public Object saveState(FacesContext facesContext) { Object values[] = new Object[3]; values[0] = super.saveState(facesContext); values[1] = this.getAttributes().get(STYLECLASS_ATTRIBUTE_KEY); values[2] = this.getAttributes().get(FACETORDER_ATTRIBUTE_KEY); return values; } @Override public void restoreState(FacesContext facesContext, Object state) { Object values[] = (Object[])state; super.restoreState(facesContext, values[0]); this.getAttributes().put(STYLECLASS_ATTRIBUTE_KEY, values[1]); this.getAttributes().put(FACETORDER_ATTRIBUTE_KEY, values[2]); } The Shuffler also needs to make the facets available to the renderer, in the order that is prescribed by the facetOrder attribute. This attribute supports three values: normal, reverse and random. public List getOrderedFacets(FacesContext facesContext) { // allowable values: normal (default) and reverse // the normal order of the facets is determined by ordering the facets by name (assuming the facetnames are string representations of integers) // create a sorted list with the integers representing the facets List facetIndexValues = new ArrayList(); List facetNames = new ArrayList(getFacets().keySet()); for (String facetName : facetNames) { facetIndexValues.add(new Integer(facetName)); } Collections.sort(facetIndexValues); // create the list of facets corrresponding to the sorted list of facet index values List orderedFacets = new ArrayList(); for (Integer index : facetIndexValues) { orderedFacets.add(getFacets().get(index.toString())); } // depending on the value for the facetOrder attribute, we may need to reorganize the orderedFacets list String facetOrder = (String)this.getAttributes().get(Shuffler.FACETORDER_ATTRIBUTE_KEY); if ("reverse".equalsIgnoreCase(facetOrder)) { Collections.reverse(orderedFacets); } else if ("random".equalsIgnoreCase(facetOrder)) { Collections.shuffle(orderedFacets); } else if ("normal".equalsIgnoreCase(facetOrder)) { // need to do nothing as with normal the order returned by getFacets() is the correct one } return orderedFacets; } The ShufflerRenderer will have to do the real work. It will retrieve the facets - in the proper order - from the Shuffler Component class and ask JSF to render them. 
package nl.amis.jsf.shuffler; import javax.faces.context.FacesContext; import javax.faces.context.ResponseWriter; import javax.faces.render.Renderer; import javax.faces.component.UIComponent; public class ShufflerRenderer extends Renderer { @Override public void encodeBegin(final FacesContext facesContext, final UIComponent component) throws IOException { super.encodeBegin(facesContext, component); final ResponseWriter writer = facesContext.getResponseWriter(); writer.startElement("DIV", component); String styleClass = (String)attributes.get(Shuffler.STYLECLASS_ATTRIBUTE_KEY); writer.writeAttribute("class", styleClass, null); List orderedFacets = ((Shuffler)component).getOrderedFacets(facesContext); for (UIComponent facet:orderedFacets) { facet.encodeAll(facesContext); } } @Override public void encodeEnd(final FacesContext facesContext, final UIComponent component) throws IOException { final ResponseWriter writer = facesContext.getResponseWriter(); writer.endElement("DIV"); } } With these changes, we can now add real content to the Shuffler and have it rendered, in the order we specified - which can be random. Also note that we can use an EL expression to have the facetOrder dynamically derived: facetOrder="#{bean.liveFacetOrder}" id="s1" > ... content ... content ... content Downloading Resources The next step in our exploration of the development of custom JSF components is the addition of resources like images and JavaScript libraries. Note that in JavaServer Faces 2.0 a new facility is available, especially for this purpose. However, in our 1.2 setting we have to come up with something ourselves. That is not to say no solutions exist for JSF 1.2; almost every library comes with a form of resource handling. Then there is the Weblet framework that was introduced especially for this purpose. Another option leverages JSF itself: its capability through PhaseListeners to intercept a request, interpret the requested ViewId and optionally serve up an image or JS file in response to the request. This approach is proposed in JavaServer Faces, The Complete Reference by Ed Burns and Chris Schalk. I have slightly modified there code for my own purposes. However, the central idea clearly is theirs. My objective is to add an image to the Shuffler component. The next step will be to allow the user to click on the image and by doing so tgrigger a re-shuffle. But that part is for later, first add the image itself. The HTML rendered by the ShufflerRenderer needs to be extended with the IMG tag, that is easy enough. Less trivial is the value for the SRC attribute on the IMG tag. 
The change in the encodeBegin method in the ShufflerRenderer: writer.startElement("IMG", component); writer.writeAttribute("src", imageUrl( facesContext,SHUFFLE_IMAGE), null); writer.writeAttribute("alt", "Click to reshuffle", null); writer.writeAttribute("width", "20px", null); writer.endElement("IMG"); With SHUFFLE_IMAGE specified as: private static String SHUFFLE_IMAGE = "shuffleIcon.png"; The imageUrl() method is defined as follows private final static String IMAGE_PATH ="/faces/images/"; protected String imageUrl(FacesContext facesContext, String image) { ViewHandler handler = facesContext.getApplication().getViewHandler(); String imageUrl = handler.getResourceURL(facesContext, IMAGE_PATH + image); return imageUrl; } The URLs for images are now constructed to look like this: http://somehost:7101/CustomJSFConsumer/faces/images/shuffleIcon.png The request for the shuffleIcon.png that is sent by the browser should be intercepted by a component that knows how to handle it. Because of the /faces/ part, this request is sent to the FacesServlet and processed through the JSF lifecycle. The componoent to intercept it will be a phaseListener that fires after restore view. It inspects the ViewId. When the ViewId contains the predefined indicator ("/images/") it steps in and takes over processing of the request. It will find the name of the image that is requested by taking the part of the ViewId that comes after /images/. It will then locate the image file on the classpath (that works well for a component packaged in a jar file, it can have the images packaged in the jar file too), looking for a directory called /images/ - as specified by the IMAGE_PATH constant. It copies the image from the file to the outputstream after setting the content type. package nl.amis.jsf; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.OutputStreamWriter; import java.net.URL; import java.net.URLConnection; import javax.faces.context.FacesContext; import javax.faces.event.PhaseEvent; import javax.faces.event.PhaseId; import javax.faces.event.PhaseListener; import javax.servlet.ServletContext; import javax.servlet.http.HttpServletResponse; public class ResourceServerPhaseListener implements PhaseListener { public ResourceServerPhaseListener() { super(); } public PhaseId getPhaseId() { return PhaseId.RESTORE_VIEW; } public void afterPhase(PhaseEvent event) { // If this is restoreView phase if (PhaseId.RESTORE_VIEW == event.getPhaseId()) { if (-1 != event.getFacesContext().getViewRoot().getViewId().indexOf(RENDER_IMAGE_TAG)) { // extract the name of the image resource from the ViewId String image = event.getFacesContext().getViewRoot().getViewId().substring(event.getFacesContext() .getViewRoot().getViewId().indexOf(RENDER_IMAGE_TAG) + RENDER_IMAGE_TAG.length()); // render the script writeImage(event, image); event.getFacesContext().responseComplete(); } } } public void beforePhase(PhaseEvent event) { } public static final String RENDER_IMAGE_TAG = "/images/"; public static final String IMAGE_PATH = "/images/"; private void writeImage(PhaseEvent event, String resourceName) { URL url = getClass().getResource(IMAGE_PATH + resourceName); URLConnection conn = null; InputStream stream = null; HttpServletResponse response = (HttpServletResponse)event.getFacesContext().getExternalContext().getResponse(); try { conn = url.openConnection(); conn.setUseCaches(false); stream = conn.getInputStream(); ServletContext servletContext = 
(ServletContext)FacesContext.getCurrentInstance().getExternalContext().getContext(); String mimeType = servletContext.getMimeType(resourceName); response.setContentType(mimeType); response.setStatus(200); // Copy the contents of the file to the output stream byte[] buf = new byte[1024]; int count = 0; while ((count = stream.read(buf)) >= 0) { response.getOutputStream().write(buf, 0, count); } response.getOutputStream().close(); } catch (Exception e) { String message = null; message = "Can't load image file:" + url.toExternalForm(); try { response.sendError(HttpServletResponse.SC_BAD_REQUEST, message); } catch (IOException f) { f.printStackTrace(); } } } } PhaseListeners need to be configured in order to be active. This configuration usually is done in the faces-config.xml of the application. Fortunately, we can also configure the PhaseListener in the faces-config.xml file that we create for the custom component. This faces-config.xml is part of the jar file in which the custom component is shipped and deployed. Its contents are merged with the application’s own faces-config.xml. The registration of our PhaseListener looks like this: nl.amis.jsf.ResourceServerPhaseListener ... Triggering events on the custom component Time to take another big step. We will support clicking the image by the end user and turn that event into a reshuffle of the facets of the Shuffler component. In the next section we will not only act on that click ourselves, but also publish an event that others can listen to. We will have to add a JavaScript event listener in the HTML rendered for the Shuffler. This client side code is triggered when the image is clicked. It will submit the form - after it has added an input element to the DOM and set a value on it. Note: this approach to have a custom component trigger an event that can be received by the server side renderer class has been described in Pro JSF and Ajax: Building Rich Internet Components - by John R. Fallows and Jonas Jacobi, the guys who first introduced me to JavaServer Faces. The JavaScript for the Shuffler component looks like this: /** * The onclick handler for ShufflerRenderer. * * @param formClientId the clientId of the enclosing UIForm component * @param clientId the clientId of the Shuffler component */ function _shuffle_click( formClientId, clientId) { var form = document.forms[formClientId]; var input = form[clientId]; if (!input) // if the input element does not already exist, create it and add it to the form { input = document.createElement("input"); input.type = 'hidden'; input.name = clientId; form.appendChild(input); } input.value = 'clicked'; form.submit(); } The JavaScript is not be directly included in the page - as it is part of the jar file in which the Shuffler component is shipped. We need a way to attach this JavaScript (it is in a file called shuffle.js) to the page from within the custom component, or in this case rather its Renderer class. We extend the ResourceServerPhaseListener to also handle JavaScript resources, just like it can handle images. 
public class ResourceServerPhaseListener implements PhaseListener { public static final String RENDER_SCRIPT_TAG = "/js/"; public static final String RENDER_IMAGE_TAG = "/images/"; public static final String SCRIPT_PATH = "/js/"; public static final String IMAGE_PATH = "/images/"; public PhaseId getPhaseId() { return PhaseId.RESTORE_VIEW; } public void afterPhase(PhaseEvent event) { // If this is restoreView phase if (PhaseId.RESTORE_VIEW == event.getPhaseId()) { // if the request is for a JavaScript library if (-1 != event.getFacesContext().getViewRoot().getViewId().indexOf(RENDER_SCRIPT_TAG)) { // extract the name of the script from the ViewId String script = event.getFacesContext().getViewRoot().getViewId().substring(event.getFacesContext() .getViewRoot().getViewId().indexOf(RENDER_SCRIPT_TAG) + RENDER_SCRIPT_TAG.length()); // render the script writeScript(event, script); event.getFacesContext().responseComplete(); } ... image handling, same as before } } public void beforePhase(PhaseEvent event) { } private void writeScript(PhaseEvent event, String resourceName) { URL url = getClass().getResource(SCRIPT_PATH + resourceName); URLConnection conn = null; InputStream stream = null; BufferedReader bufReader = null; HttpServletResponse response = (HttpServletResponse)event.getFacesContext().getExternalContext().getResponse(); OutputStreamWriter outWriter = null; String curLine = null; try { outWriter = new OutputStreamWriter(response.getOutputStream(), response.getCharacterEncoding()); conn = url.openConnection(); conn.setUseCaches(false); stream = conn.getInputStream(); bufReader = new BufferedReader(new InputStreamReader(stream)); response.setContentType("text/javascript"); response.setStatus(200); while (null != (curLine = bufReader.readLine())) { outWriter.write(curLine + "\n"); } outWriter.close(); } catch (Exception e) { String message = null; message = "Can't load script file:" + url.toExternalForm(); try { response.sendError(HttpServletResponse.SC_BAD_REQUEST, message); } catch (IOException f) { f.printStackTrace(); } } } private void writeImage(PhaseEvent event, String resourceName) { ... same as before } } The Renderer class is responsible for rendering the markup that will include the JavaScript resources to the page (the script element). We could have multiple occurrences of our custom component in a page. However, the JavaScript file shuffle.js should be loaded only once, to prevent excessive and completely pointless browser requests. In order to make that happen, the Renderer indicates to a method writeScriptResource that it has a JavaScript resource that should be included. This method verifies whether a script tag for downloading that same resource has already been added in the current request. If so, it will not add another script tag. 
If not [already included] then the tag is added with its src attribute referring to the proper PhaseListener controlled url: protected void writeScriptResource( FacesContext context, String resourcePath) throws IOException { Set scriptResources = _getScriptResourcesAlreadyWritten(context); // Set.add() returns true only if item was added to the set // and returns false if item was already present in the set if (scriptResources.add(resourcePath)) { ViewHandler handler = context.getApplication().getViewHandler(); String resourceURL = handler.getResourceURL(context, SCRIPT_PATH +resourcePath); ResponseWriter out = context.getResponseWriter(); out.startElement("script", null); out.writeAttribute("type", "text/javascript", null); out.writeAttribute("src", resourceURL, null); out.endElement("script"); } } private Set _getScriptResourcesAlreadyWritten( FacesContext context) { ExternalContext external = context.getExternalContext(); Map requestScope = external.getRequestMap(); Set written = (Set)requestScope.get(_SCRIPT_RESOURCES_KEY); if (written == null) { written = new HashSet(); requestScope.put(_SCRIPT_RESOURCES_KEY, written); } return written; } static private final String _SCRIPT_RESOURCES_KEY = ShufflerRenderer.class.getName() + ".SCRIPTS_WRITTEN"; With these helper methods in place, the ShufflerRenderer can be extended to include the client side click handling code: @Override public void encodeBegin(final FacesContext facesContext, final UIComponent component) throws IOException { super.encodeBegin(facesContext, component); final Map attributes = component.getAttributes(); final ResponseWriter writer = facesContext.getResponseWriter(); String formClientId = _findFormClientId(facesContext, component); String shuffleClientId = component.getClientId(facesContext); writeScriptResource(context, "shuffle.js"); writer.startElement("DIV", component); String styleClass = (String)attributes.get(Shuffler.STYLECLASS_ATTRIBUTE_KEY); writer.writeAttribute("class", styleClass, null); writer.startElement("SPAN", component); writer.writeAttribute("onClick", "_shuffle_click('" + formClientId + "'," + "'" + shuffleClientId + "')", null); writer.startElement("IMG", component); writer.writeAttribute("src", imageUrl( facesContext,SHUFFLE_IMAGE), null); writer.writeAttribute("alt", "Click to reshuffle", null); writer.writeAttribute("width", "20px", null); writer.endElement("IMG"); writer.endElement("SPAN"); List orderedFacets = ((Shuffler)component).getOrderedFacets(facesContext); for (UIComponent facet:orderedFacets) { facet.encodeAll(facesContext); } } protected void encodeResources(FacesContext context, UIComponent component) throws IOException { writeScriptResource(context, "shuffle.js"); } /** * Finds the parent UIForm component client identifier. * * @param context the Faces context * @param component the Faces component * * @return the parent UIForm or RichForm (for usage in ADF) client identifier, if present, otherwise null */ private String _findFormClientId(FacesContext context, UIComponent component) { if (component==null) { return null; } if (component instanceof UIForm || component.getClass().getName().endsWith("RichForm")) { return component.getClientId(context); } else { return _findFormClientId(context, component.getParent()); } } The image is wrapped in a SPAN and the onclick event handler is defined on that SPAN element (this allows us to later on add more clickable stuff to the SPAN). When the image is clicked, the _shuffle_click function is invoked - that was loaded from shuffle.js. 
The hidden input element is added to the form and the form is submitted. The HTML rendered by this renderer now looks roughly like this:
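A sketch of that markup, assuming a form with client id "mainForm", a shuffler with client id "mainForm:s1", and the CustomJSFConsumer context root used earlier; the real ids, URLs, and facet content will differ:

<form id="mainForm" ...>
  ...
  <script type="text/javascript" src="/CustomJSFConsumer/faces/js/shuffle.js"></script>
  <div class="shuffleBox">
    <span onClick="_shuffle_click('mainForm','mainForm:s1')">
      <img src="/CustomJSFConsumer/faces/images/shuffleIcon.png" alt="Click to reshuffle" width="20px">
    </span>
    ... facet content, rendered in the order chosen by getOrderedFacets() ...
  </div>
  <!-- after a click, shuffle.js appends a hidden input named "mainForm:s1"
       with the value "clicked" and submits the form -->
</form>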
October 7, 2009
by Wouter Van Reeven
· 44,293 Views
article thumbnail
Multithreading and the Java Memory Model
At the New England Software Symposium, I attended Brian Goetz's session called "The Java Memory Model". When I saw the phrase "memory model" in the title I thought it would be about garbage collection, memory allocation and memory types. Instead, it is really about multithreading. The difference is that this presentation focuses on visibility, not locking or atomicity. This is my attempt to summarize his talk. The importance of visibility Visibility here refers to the memory that an executing thread can see once it is written. The big gotcha is that when thread A writes something before thread B reads it, it does not mean thread B will read the correct value. You could ensure that threads A and B are ordered with locking but you can still be in deep doo doo because the memory is not written and read in order, or is read in a partially written state. A big part of this peril comes from the layered memory architecture of modern hardware: multi-CPU, multi-core CPUs, multi-level caches on and off chip etc. Instructions could be executed in parallel or out of order. The memory being written may not even be in RAM at all: it could be on a remote core's register. But the danger could also come from old-fashioned compiler optimizations. One of Brian's examples is the following loop which depends on another thread to set the boolean field asleep: while (!asleep) ++sheep; The compiler may notice that asleep is loop-invariant and optimize its evaluation out of the loop if (!asleep) while (true) ++sleep; The result is an infinite loop. The fix in this case is to use a volatile variable. The Java Memory Model A memory model describes when one thread's actions are guaranteed to be visible to another. The Java memory model (JMM) is quite an achievement: previously, memory models were specific to each processor architecture. A cross-platform memory model takes portability well beyond being able to compile the same source code: you really can run it anywhere. It took until Java 5 (JSR 133) to get the JMM right. The JMM defines a partial ordering on program actions (read/write, lock/unlock, start/join threads) called happens-before. Basically, if action X happens-before Y, then X's results are visible to Y. Within a thread, the order is basically the program order. It's straightforward. But between threads, if you don't use synchronized or volatile, there are no visibility guarantees. As far as visible results go, there is no guarantee that thread A will see them in the order that thread B executes them. Brian even invoked special relativity to describe the disorienting effects of relative views of reality. You need synchronization to get inter-thread visibility guarantees. The basic tools of thread synchronization are: The synchronized keyword: an unlock happens-before every subsequent lock on the same monitor. The volatile keyword: a write to a volatile variable happens-before subsequent reads of that variable. Static initialization: done by the class loader, so the JVM guarantees thread safety In addition to the above, the JMM offers a guarantee of initialization safety for immutable objects. The Rules Here are points that Brian emphasized: If you read or write a field that is read/written by another thread, you must synchronize. This must be done by both the reading and writing threads, and on the same lock. Don't try to reason about ordering in undersynchronized programs. Avoiding synchronization can cause subtle bugs that only blow up in production. Do it right first, then make it fast. 
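As a concrete illustration of the volatile fix mentioned above for the sheep-counting loop, a minimal sketch (class and method names are assumptions) could look like this:

public class SheepCounter {
    // volatile: the write in fallAsleep() happens-before every subsequent
    // read of asleep, so the counting thread is guaranteed to see it.
    private volatile boolean asleep;
    private int sheep;

    public void countSheep() {
        while (!asleep) {
            ++sheep;
        }
    }

    public void fallAsleep() {
        asleep = true;
    }
}

Because asleep is volatile, the compiler is no longer free to hoist the read out of the loop, so the infinite-loop transformation shown earlier is not allowed.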
Case study: double-checked locking One example of synchronization avoidance gone bad is the popular double-checked locking idiom for lazy initialization, which we now know is broken: private Thing instance = null; public Thing getInstance() { if (instance == null) { synchronized (this) { if (instance == null) instance = new Thing(); } } return instance; } This idiom can result in a partially constructed Thing object, because it only worries about atomicity at the expense of visibility. There are ways to fix this, of course, such as using a volatile field or switching to using static initializers. But it's easy to get it wrong, so Brian questions why we would want to do something like this in the first place. The main motivation was to avoid synchronization in the common case. While it used to be expensive in the past, uncontended synchronization is much cheaper now. There is still a lot of advice to avoid supposedly expensive Java operations out there, but the JVM has improved tremendously and a lot of old performance tips (like object pooling) just don't make sense anymore. Beware of reading years-old advice when you Google for Java tips. Remember Brian's advice above against premature optimization. That said, he also showed a couple of better alternatives for lazy initialization. Some thoughts This talk was a reminder to me that low-level multithreading is hard. It's hard enough that it took years to get the JMM right. It's hard enough that a university professor would say "don't do it". And if you faithfully follow Brian's rules and use synchronization primitives everywhere, you might find yourself vulnerable to thread deadlocks (hmmm ... why does JConsole have a deadlock detection function?). The primary danger in multithreading is in shared, mutable state. Without shared mutable data, threads might as well be separate processes, and the danger evaporates. So while it's wonderful what JMM has done for cross-platform visibility guarantees, I think we would do ourselves a favor if we tried to minimize shared mutable data. There are often higher level alternatives. For example, Scala's Actor construct relies on passing immutable messages instead of sharing memory. From http://chriswongdevblog.blogspot.com/
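One of those safer alternatives is the initialization-on-demand holder idiom; the sketch below is mine rather than code from the slides, and it reuses the Thing class from the broken example above. It gets lazy initialization from the JVM's thread-safe class initialization, with no explicit locking at all:

public class ThingProvider {
    private ThingProvider() {
    }

    // Holder is not initialized until getInstance() is first called, and class
    // initialization is guaranteed by the JVM to happen safely, exactly once.
    private static class Holder {
        static final Thing INSTANCE = new Thing(); // Thing is the class from the example above
    }

    public static Thing getInstance() {
        return Holder.INSTANCE;
    }
}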
October 5, 2009
by Christopher Wong
· 46,487 Views
article thumbnail
IntelliJ IDEA Finds Bugs with FindBugs
The FindBugs plugin adds another static code analysis tool to the set already available in IntelliJ IDEA.
September 25, 2009
by Vaclav Pech
· 61,967 Views · 2 Likes
article thumbnail
A Look Inside JBoss Microcontainer, Part 3 - the Virtual File System
We're finally back with our next article in the Microcontainer series. In the first two articles we demonstrated how Microcontainer supports , and showed its powerful . In this article, we'll explain Classloading and Deployers, but first we must familiarize ourselves with VFS. VFS stands, as expected, for Virtual File System. What does VFS solve for us, or why is it useful? Here, at JBoss, we saw that a lot of similar resource handling code was scattered/duplicated all over the place. In most cases it was code that was trying to determine what type of resource a particular resource was, e.g. is it a file, a directory, or a jar loading resources through URLs. Processing of nested archives was also reimplemented again, and again in different libraries. Read the other parts in DZone's exclusive JBoss Microcontainer Series: Part 4 -- ClassLoading Layer Example: public static URL[] search(ClassLoader cl, String prefix, String suffix) throws IOException { Enumeration[] e = new Enumeration[]{ cl.getResources(prefix), cl.getResources(prefix + "MANIFEST.MF") }; Set all = new LinkedHashSet(); URL url; URLConnection conn; JarFile jarFile; for (int i = 0, s = e.length; i < s; ++i) { while (e[i].hasMoreElements()) { url = (URL)e[i].nextElement(); conn = url.openConnection(); conn.setUseCaches(false); conn.setDefaultUseCaches(false); if (conn instanceof JarURLConnection) { jarFile = ((JarURLConnection)conn).getJarFile(); } else { jarFile = getAlternativeJarFile(url); } if (jarFile != null) { searchJar(cl, all, jarFile, prefix, suffix); } else { boolean searchDone = searchDir(all, new File(URLDecoder.decode(url.getFile(), "UTF-8")), suffix); if (searchDone == false) { searchFromURL(all, prefix, suffix, url); } } } } return (URL[])all.toArray(new URL[all.size()]); } private static boolean searchDir(Set result, File file, String suffix) throws IOException { if (file.exists() && file.isDirectory()) { File[] fc = file.listFiles(); String path; for (int i = 0; i < fc.length; i++) { path = fc[i].getAbsolutePath(); if (fc[i].isDirectory()) { searchDir(result, fc[i], suffix); } else if (path.endsWith(suffix)) { result.add(fc[i].toURL()); } } return true; } return false; } There were also many problems with file locking on Windows systems, which forced us to copy all hot-deployable archives to another location to prevent locking those in deploy folders (which would prevent their deletion and filesystem based undeploy). File locking was a major problem that could only be addressed by centralizing all the resource loading code in one place. Recognizing a need to deal with all of these issues in one place, wrapping it all into a simple and useful API, we created the VFS project. VFS public API Basic usage in VFS can be split in two pieces: simple resource navigation visitor pattern API As mentioned, in plain JDK resource handling navigation over resources is far from trivial. You must always check what kind of resource you're currently handling, and this is very cumbersome. With VFS we wanted to limit this to a single resource type - VirtualFile. public class VirtualFile implements Serializable { /** * Get certificates. 
* * @return the certificates associated with this virtual file */ Certificate[] getCertificates() /** * Get the simple VF name (X.java) * * @return the simple file name * @throws IllegalStateException if the file is closed */ String getName() /** * Get the VFS relative path name (org/jboss/X.java) * * @return the VFS relative path name * @throws IllegalStateException if the file is closed */ String getPathName() /** * Get the VF URL (file://root/org/jboss/X.java) * * @return the full URL to the VF in the VFS. * @throws MalformedURLException if a url cannot be parsed * @throws URISyntaxException if a uri cannot be parsed * @throws IllegalStateException if the file is closed */ URL toURL() throws MalformedURLException, URISyntaxException /** * Get the VF URI (file://root/org/jboss/X.java) * * @return the full URI to the VF in the VFS. * @throws URISyntaxException if a uri cannot be parsed * @throws IllegalStateException if the file is closed * @throws MalformedURLException for a bad url */ URI toURI() throws MalformedURLException, URISyntaxException /** * When the file was last modified * * @return the last modified time * @throws IOException for any problem accessing the virtual file system * @throws IllegalStateException if the file is closed */ long getLastModified() throws IOException /** * Returns true if the file has been modified since this method was last called * Last modified time is initialized at handler instantiation. * * @return true if modifed, false otherwise * @throws IOException for any error */ boolean hasBeenModified() throws IOException /** * Get the size * * @return the size * @throws IOException for any problem accessing the virtual file system * @throws IllegalStateException if the file is closed */ long getSize() throws IOException /** * Tests whether the underlying implementation file still exists. * @return true if the file exists, false otherwise. * @throws IOException - thrown on failure to detect existence. */ boolean exists() throws IOException /** * Whether it is a simple leaf of the VFS, * i.e. whether it can contain other files * * @return true if a simple file. * @throws IOException for any problem accessing the virtual file system * @throws IllegalStateException if the file is closed */ boolean isLeaf() throws IOException /** * Is the file archive. * * @return true if archive, false otherwise * @throws IOException for any error */ boolean isArchive() throws IOException /** * Whether it is hidden * * @return true when hidden * @throws IOException for any problem accessing the virtual file system * @throws IllegalStateException if the file is closed */ boolean isHidden() throws IOException /** * Access the file contents. * * @return an InputStream for the file contents. * @throws IOException for any error accessing the file system * @throws IllegalStateException if the file is closed */ InputStream openStream() throws IOException /** * Do file cleanup. * * e.g. delete temp files */ void cleanup() /** * Close the file resources (stream, etc.) 
*/ void close() /** * Delete this virtual file * * @return true if file was deleted * @throws IOException if an error occurs */ boolean delete() throws IOException /** * Delete this virtual file * * @param gracePeriod max time to wait for any locks (in milliseconds) * @return true if file was deleted * @throws IOException if an error occurs */ boolean delete(int gracePeriod) throws IOException /** * Get the VFS instance for this virtual file * * @return the VFS * @throws IllegalStateException if the file is closed */ VFS getVFS() /** * Get the parent * * @return the parent or null if there is no parent * @throws IOException for any problem accessing the virtual file system * @throws IllegalStateException if the file is closed */ VirtualFile getParent() throws IOException /** * Get a child * * @param path the path * @return the child or null if not found * @throws IOException for any problem accessing the VFS * @throws IllegalArgumentException if the path is null * @throws IllegalStateException if the file is closed or it is a leaf node */ VirtualFile getChild(String path) throws IOException /** * Get the children * * @return the children * @throws IOException for any problem accessing the virtual file system * @throws IllegalStateException if the file is closed */ List getChildren() throws IOException /** * Get the children * * @param filter to filter the children * @return the children * @throws IOException for any problem accessing the virtual file system * @throws IllegalStateException if the file is closed or it is a leaf node */ List getChildren(VirtualFileFilter filter) throws IOException /** * Get all the children recursively * * This always uses {@link VisitorAttributes#RECURSE} * * @return the children * @throws IOException for any problem accessing the virtual file system * @throws IllegalStateException if the file is closed */ List getChildrenRecursively() throws IOException /** * Get all the children recursively * * This always uses {@link VisitorAttributes#RECURSE} * * @param filter to filter the children * @return the children * @throws IOException for any problem accessing the virtual file system * @throws IllegalStateException if the file is closed or it is a leaf node */ List getChildrenRecursively(VirtualFileFilter filter) throws IOException /** * Visit the virtual file system * * @param visitor the visitor * @throws IOException for any problem accessing the virtual file system * @throws IllegalArgumentException if the visitor is null * @throws IllegalStateException if the file is closed */ void visit(VirtualFileVisitor visitor) throws IOException } As you can see you have all of the usual read-only File System operations, plus a few options to cleanup or delete the resource. Cleanup or deletion handling is needed when we're dealing with some internal temporary files; e.g. from nested jars handling. To switch from JDK's File or URL resource handling to new VirtualFile we need a root. It is the VFS class that knows how to create one with the help of URL or URI parameter. 
public class VFS { /** * Get the virtual file system for a root uri * * @param rootURI the root URI * @return the virtual file system * @throws IOException if there is a problem accessing the VFS * @throws IllegalArgumentException if the rootURL is null */ static VFS getVFS(URI rootURI) throws IOException /** * Create new root * * @param rootURI the root url * @return the virtual file * @throws IOException if there is a problem accessing the VFS * @throws IllegalArgumentException if the rootURL */ static VirtualFile createNewRoot(URI rootURI) throws IOException /** * Get the root virtual file * * @param rootURI the root uri * @return the virtual file * @throws IOException if there is a problem accessing the VFS * @throws IllegalArgumentException if the rootURL is null */ static VirtualFile getRoot(URI rootURI) throws IOException /** * Get the virtual file system for a root url * * @param rootURL the root url * @return the virtual file system * @throws IOException if there is a problem accessing the VFS * @throws IllegalArgumentException if the rootURL is null */ static VFS getVFS(URL rootURL) throws IOException /** * Create new root * * @param rootURL the root url * @return the virtual file * @throws IOException if there is a problem accessing the VFS * @throws IllegalArgumentException if the rootURL */ static VirtualFile createNewRoot(URL rootURL) throws IOException /** * Get the root virtual file * * @param rootURL the root url * @return the virtual file * @throws IOException if there is a problem accessing the VFS * @throws IllegalArgumentException if the rootURL */ static VirtualFile getRoot(URL rootURL) throws IOException /** * Get the root file of this VFS * * @return the root * @throws IOException for any problem accessing the VFS */ VirtualFile getRoot() throws IOException } You can see three different methods that look a lot alike - getVFS, createNewRoot and getRoot. The getVFS method returns a VFS instance and, importantly, does not yet create a VirtualFile instance. Why is this important? Because there are methods which help us configure a VFS instance (see the VFS class API javadocs) before telling it to create a VirtualFile root. The other two methods, on the other hand, use default settings for root creation. The difference between createNewRoot and getRoot is in caching details, which we'll delve into later on. URL rootURL = ...; // get root url VFS vfs = VFS.getVFS(rootURL); // configure vfs instance VirtualFile root1 = vfs.getRoot(); // or you can get root directly VirtualFile root2 = VFS.createNewRoot(rootURL); VirtualFile root3 = VFS.getRoot(rootURL); The other useful thing about the VFS API is its implementation of a proper visitor pattern. This way it's very simple to recursively gather different resources, something quite impossible to do with plain JDK resource loading. public interface VirtualFileVisitor { /** * Get the search attributes for this visitor * * @return the attributes */ VisitorAttributes getAttributes(); /** * Visit a virtual file * * @param virtualFile the virtual file being visited */ void visit(VirtualFile virtualFile); } VirtualFile root = ...; // get root VirtualFileVisitor visitor = new SuffixVisitor(".class"); // get all classes root.visit(visitor); VFS Architecture While the public API is quite intuitive, the real implementation details are a bit more complex. We'll try to explain the concepts in a quick pass. Each time you create a VFS instance, its matching VFSContext instance is created. This creation is done via VFSContextFactory. 
Different protocols map to different VFSContextFactory instances - e.g. file/vfsfile map to FileSystemContextFactory, while zip/vfszip map to ZipEntryContextFactory. Also, each time a VirtualFile instance is created, its matching VirtualFileHandler is created. It's this VirtualFileHandler instance that knows how to handle different resource types properly - the VirtualFile API just delegates invocations to its VirtualFileHandler reference. As one would expect, the VFSContext instance is the one that knows how to create VirtualFileHandler instances according to the resource type - e.g. ZipEntryContextFactory creates ZipEntryContext, which then creates ZipEntryHandler. Existing implementations Apart from files, directories (FileHandler) and zip archives (ZipEntryHandler) we also support other, more exotic usages. The first one is Assembled, which is similar to what Eclipse calls Linked Resources. Its idea is to take existing resources from different trees and "mock" them into a single resource tree. AssembledDirectory sar = AssembledContextFactory.getInstance().create("assembled.sar"); URL url = getResource("/vfs/test/jar1.jar"); VirtualFile jar1 = VFS.getRoot(url); sar.addChild(jar1); url = getResource("/tmp/app/ext.jar"); VirtualFile ext1 = VFS.getRoot(url); sar.addChild(ext1); AssembledDirectory metainf = sar.mkdir("META-INF"); url = getResource("/config/jboss-service.xml"); VirtualFile serviceVF = VFS.getRoot(url); metainf.addChild(serviceVF); AssembledDirectory app = sar.mkdir("app.jar"); url = getResource("/app/someapp/classes"); VirtualFile appVF = VFS.getRoot(url); app.addPath(appVF, new SuffixFilter(".class")); Another implementation is in-memory files. In our case this came out of a need to easily handle AOP-generated bytes. Instead of mucking around with temporary files, we simply drop bytes into in-memory VirtualFileHandlers. URL url = new URL("vfsmemory://aopdomain/org/acme/test/Test.class"); byte[] bytes = ...; // some AOP generated class bytes MemoryFileFactory.putFile(url, bytes); VirtualFile classFile = VFS.getVirtualFile(new URL("vfsmemory://aopdomain"), "org/acme/test/Test.class"); InputStream bis = classFile.openStream(); // e.g. load class from input stream Extension hooks It's quite easy to extend VFS with a new protocol, similar to what we've done with Assembled and Memory. All you need is a combination of VFSContextFactory, VFSContext, VirtualFileHandler, FileHandlerPlugin and URLStreamHandler implementations. The first one is trivial, while the others depend on the complexity of your task - e.g. you could implement rar, tar, gzip or even remote access. In the end you simply register this new VFSContextFactory with VFSContextFactoryLocator. See this article's demo for a simple gzip example. Features One of the first major problems we stumbled upon was proper handling of nested resources, more exactly nested jar files, e.g. in a normal ear deployment: gema.ear/ui.war/WEB-INF/lib/struts.jar In order to read the contents of struts.jar we have two options: handle the resources in memory, or recursively create top-level temporary copies of nested jars. The first option is easier to implement, but it's very memory-consuming--just imagine huge apps in memory. The other approach leaves a bunch of temporary files, which should be invisible to the plain user, who expects them to disappear once the deployment is undeployed. Now imagine the following scenario: A user gets a hold of VFS's URL instance, which points to some nested resource. 
The way plain VFS would handle this is to re-create the whole path from scratch, meaning it would unpack nested resources over and over again. This would (and it did) lead to a huge pile of temporary files. How to avoid this? The way we approached this is by using VFSRegistry, VFSCache and TempInfo. When you ask for a VirtualFile over VFS (getRoot, not createNewRoot), VFS asks the VFSRegistry implementation to provide the file. The existing DefaultVFSRegistry first checks whether a matching root VFSContext exists for the provided URI. If it does, it first tries to navigate to an existing TempInfo (a link to temporary files), falling back to regular navigation if no such temporary file exists. This way we completely re-use any already unpacked temporary files, saving time and disk space. If no matching VFSContext is found in the cache, we create a new VFSCache entry and continue with default navigation. It's then up to the VFSCache implementation used how it handles cached VFSContext entries. VFSCache is configurable via VFSCacheFactory - by default we don't cache anything, but there are a few useful existing VFSCache implementations, ranging from LRU to timed caches. API Use case There is a class called VFSUtils which is part of the public API, and it is sort of a dumping ground of useful functionality. It contains a bunch of helpful methods and configuration settings (system property keys, actually). Check the API javadocs for more details. Existing issues / workarounds Another issue that came up - expectedly - was the inability of some frameworks to work properly on top of VFS. The problem lay in custom VFS URLs like vfsfile, vfszip and vfsmemory. In most cases you could still work around it with plain URL or URLConnection usage, but a lot of frameworks do a strict match on the file or jar protocol, which of course fails. We were able to patch some frameworks (e.g. Facelets) and provide extensions to others (e.g. Spring). If you are a library developer, and your library has a simple pluggable resource loading mechanism, then we suggest you simply extend it with a VFS based implementation. If there are no hooks, try to limit your assumptions to more general usage based on URL or URLConnection. Conclusion While VFS is very nice to use, it comes at a price. It adds an additional layer on top of the JDK's resource handling, meaning extra invocations are always present when you're dealing with resources. We also keep some of the jar handling info in memory to make it easy to get hold of a specific resource, at the expense of some extra memory consumption. Overall VFS proved to be a very useful library, as it hides away many use cases that are painful with the plain JDK and provides a comprehensive API for working with resources - e.g. the visitor pattern implementation. We're constantly following user feedback on the VFS issues they encounter, making each version a bit better. Now that we've gotten to know VFS, it's time to move on to MC's new ClassLoading layer! About the Author Ales Justin was born in Ljubljana, Slovenia and graduated with a degree in mathematics from the University of Ljubljana. He fell in love with Java seven years ago and has spent most of his time developing information systems, ranging from customer service to energy management. He joined JBoss in 2006 to work full time on the Microcontainer project, currently serving as its lead. He also contributes to JBoss AS and is a Seam and Spring integration specialist. He represents JBoss on the 'JSR-291 Dynamic Component Support for Java SE' and 'OSGi' expert groups.
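One small gap worth filling from the visitor example earlier: the SuffixVisitor class used there is not shown in the listings. A minimal sketch, based only on the VirtualFileVisitor interface printed above (the real class in the VFS project may well differ), could look like this; imports of the VFS classes themselves are omitted:

import java.util.ArrayList;
import java.util.List;

public class SuffixVisitor implements VirtualFileVisitor {
    private final String suffix;
    private final List<VirtualFile> matches = new ArrayList<VirtualFile>();

    public SuffixVisitor(String suffix) {
        this.suffix = suffix;
    }

    public VisitorAttributes getAttributes() {
        // walk the whole tree, just like getChildrenRecursively() does
        return VisitorAttributes.RECURSE;
    }

    public void visit(VirtualFile file) {
        if (file.getName().endsWith(suffix)) {
            matches.add(file);
        }
    }

    public List<VirtualFile> getMatches() {
        return matches;
    }
}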
September 24, 2009
by Ales Justin
· 42,835 Views
article thumbnail
Sorting Collections in Hibernate Using SQL in @OrderBy
When you have collections of associated objects in domain objects, you generally want to specify some kind of default sort order. For example, suppose I have domain objects Timeline and Event: @Entity class Timeline { @Required String description @OneToMany(mappedBy = "timeline") @javax.persistence.OrderBy("startYear, endYear") Set events } @Entity class Event { @Required Integer startYear Integer endYear @Required String description @ManyToOne Timeline timeline } In the above example I've used the standard JPA (Java Persistence API) @OrderBy annotation, which allows you to specify the order of a collection of objects via object properties, in this example a @OneToMany association. I'm ordering first by startYear in ascending order and then by endYear, also in ascending order. This is all well and good, but note that I've specified that only the start year is required. (The @Required annotation is a custom Hibernate Validator annotation which does exactly what you would expect.) How are the events ordered when you have several events that start in the same year but some of them have no end year? The answer is that it depends on how your database sorts null values by default. Under Oracle 10g nulls will come last. For example, if two events both start in 2001 and one of them has no end year, here is how they are ordered: 2001 2002 Some event 2001 2003 Other event 2001 Event with no end year What if you want to control how null values are ordered so they come first rather than last? In Hibernate there are several ways you could do this. First, you could use the Hibernate-specific @Sort annotation to perform in-memory (i.e. not in the database) sorting, using natural sorting or sorting using a Comparator you supply. For example, assume I have an EventComparator helper class that implements Comparator. I could change Timeline's collection of events to look like this: @OneToMany(mappedBy = "timeline") @org.hibernate.annotations.Sort(type = SortType.COMPARATOR, comparator = EventComparator) Set events Using @Sort will perform sorting in-memory once the collection has been retrieved from the database. While you can certainly do this and implement arbitrarily complex sorting logic, it's probably better to sort in the database when you can. So we now need to turn to Hibernate's @OrderBy annotation, which lets you specify a SQL fragment describing how to perform the sort. For example, you can change the events mapping to: @OneToMany(mappedBy = "timeline") @org.hibernate.annotations.OrderBy("start_year, end_year") Set events This sort order is the same as using the JPA @OrderBy with "startYear, endYear" sort order. But since you write actual SQL in Hibernate's @OrderBy you can take advantage of whatever features your database has, at the possible expense of portability across databases. As an example, Oracle 10g supports using a syntax like "order by start_year, end_year nulls first" to order null end years before non-null end years. You could also say "order by start_year, end_year nulls last" which sorts null end years last as you would expect. This syntax is probably not portable, so another trick you can use is the NVL function, which is supported in a bunch of databases. You can rewrite Timeline's collection of events like so: @OneToMany(mappedBy = "timeline") @org.hibernate.annotations.OrderBy("start_year, nvl(end_year, start_year)") Set events The expression "nvl(end_year, start_year)" simply says to use end_year as the sort value if it is not null, and start_year if it is null. 
So for sorting purposes you end up treating end_year the same as start_year whenever end_year is null. In the contrived example earlier, applying the nvl-based sort using Hibernate's @OrderBy to specify SQL sorting criteria, you now end up with the events sorted like this: 2001 Event with no end year 2001 2002 Some event 2001 2003 Other event This is what you wanted in the first place. So if you need more complex sorting logic than what you can get out of the standard JPA @javax.persistence.OrderBy, try one of the Hibernate sorting options, either @org.hibernate.annotations.Sort or @org.hibernate.annotations.OrderBy. Adding a SQL fragment into your domain class isn't necessarily the most elegant thing in the world, but it might be the most pragmatic thing.
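As an aside, the EventComparator mentioned above for the in-memory @Sort option is not shown in the article. A sketch that mirrors the nulls-first ordering might look like the following; it assumes Event exposes getStartYear() and getEndYear() getters, which the Groovy-style domain class above would generate:

import java.util.Comparator;

public class EventComparator implements Comparator<Event> {
    public int compare(Event a, Event b) {
        // primary sort: start year ascending (startYear is required, so never null)
        int byStart = a.getStartYear().compareTo(b.getStartYear());
        if (byStart != 0) {
            return byStart;
        }
        // secondary sort: end year ascending, with null end years first
        Integer endA = a.getEndYear();
        Integer endB = b.getEndYear();
        if (endA == null && endB == null) return 0;
        if (endA == null) return -1;
        if (endB == null) return 1;
        return endA.compareTo(endB);
    }
}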
September 16, 2009
by Scott Leberknight
· 101,825 Views
article thumbnail
Calculating Age in SQL
Calculates a person's age using Oracle SQL: Select Trunc ( (SYSDATE - to_date('14/07/1980','dd/mm/yyyy')) /365, 0 ) as "age" from Dual
September 13, 2009
by Erico Marineli
· 6,081 Views
article thumbnail
Managing Eclipse RCP Launch Arguments
In my last post I discussed how to best manage run configurations for Eclipse RCP applications. But there was one related topic I wanted to discuss in more detail, and that is how to manage launch arguments. What are launch arguments? Launch arguments are arguments that are added to the command line when you execute your application. These arguments come in two flavors: program arguments – arguments that are Eclipse-specific. For example, the -clean argument will clear the configuration area on startup. VM arguments – arguments that make sense to the Java VM. For example, the -Xmx argument allows you to set the maximum heap size for the VM. Both of these argument types can be set on the Arguments tab in the Run Configurations dialog. Launch arguments and the target platform We oftentimes want to apply the same launch arguments to all of our run configurations, and one way to handle that is to specify them on your target platform. On the Target Platform preference page there is a section where you can add whatever arguments you wish. The arguments associated with a target platform will be added to run configurations generated from the manifest editor. They will not be added to configurations generated by the product configuration editor. Also, because the manifest editor link does not regenerate a configuration each time, you will need to explicitly delete a configuration if you want to recreate it using new target platform arguments. Launch arguments and products A second way to manage arguments is to add them using the Launching tab of the product configuration editor. When you add arguments in this way, two things will happen: The arguments will be added to your run configurations if you launch using the link in the product configuration editor. Because this link regenerates the run configuration each time, consistent use of the link guarantees that your configuration is in synch with your product definition. The arguments will also be added to your deployed application in the form of an ini file. This is a nice feature, but it means that you need to be careful when adding arguments that are only useful during development. For example, you may want to use -clean to clear the configuration area when you're developing, but you probably do not want to ship this argument to your customers. Launch arguments best practices My approach is to add arguments using the product configuration editor and to always launch my applications using the link in that editor. This guarantees that my run configurations are always in synch with my product definition. I also take care not to add arguments that would be detrimental to a deployed application. Some, such as -consolelog, I consider harmless in a deployed app and I just leave those in. If for some reason I absolutely have to add an argument that should not be deployed, I usually clean it out of the ini file during the build process. It's pretty rare for me to have to do this, though. From http://www.modumind.com
September 9, 2009
by Patrick Paulin
· 10,473 Views
article thumbnail
Java Performance Tuning, Profiling, and Memory Management
Get a perspective on the aspects of JVM internals, controls, and switches that can be used to optimize your Java application.
September 1, 2009
by Vikash Ranjan
· 256,456 Views · 17 Likes
article thumbnail
JPA Performance, Don't Ignore the Database
Good database schema design is important for performance. One of the most basic optimizations is to design your tables to take as little space on disk as possible; this makes disk reads faster and uses less memory for query processing. Data Types You should use the smallest data types possible, especially for indexed fields. The smaller your data types, the more indexes (and data) can fit into a block of memory, and the faster your queries will be. Normalization Database normalization eliminates redundant data, which usually makes updates faster since there is less data to change. However, a normalized schema causes joins for queries, which makes queries slower; denormalization speeds retrieval. More normalized schemas are better for applications involving many transactions, while less normalized schemas are better for reporting types of applications. You should normalize your schema first, then de-normalize later. Applications often need to mix the approaches, for example use a partially normalized schema, and duplicate, or cache, selected columns from one table in another table. With JPA O/R mapping you can use the @Embedded annotation for denormalized columns to specify a persistent field whose @Embeddable type can be stored as an intrinsic part of the owning entity and share the identity of the entity. Database Normalization and Mapping Inheritance Hierarchies The class inheritance hierarchy shown below will be used as an example of JPA O/R mapping. In the Single Table per Class mapping shown below, all classes in the hierarchy are mapped to a single table in the database. This table has a discriminator column (mapped by @DiscriminatorColumn), which identifies the subclass. Advantages: this is fast for querying, as no joins are required. Disadvantages: it wastes space, since all inherited fields are in every row; a deep inheritance hierarchy will result in wide tables with many columns, some of them empty. In the Joined Subclass mapping shown below, the root of the class hierarchy is represented by a single table, and each subclass has a separate table that only contains those fields specific to that subclass. This is normalized (it eliminates redundant data), which is better for storage and updates. However, queries cause joins, which makes queries slower, especially for deep hierarchies, polymorphic queries and relationships. In the Table per Class mapping (in JPA 2.0, optional in JPA 1.0), every concrete class is mapped to a table in the database and all the inherited state is repeated in that table. This is not normalized; inherited data is repeated, which wastes space. Queries for entities of the same type are fast, but polymorphic queries cause unions, which are slower. Know what SQL is executed You need to understand the SQL queries your application makes and evaluate their performance. It's a good idea to enable SQL logging, then go through a use case scenario to check the executed SQL. Logging is not part of the JPA specification. With EclipseLink you can enable logging of SQL by setting the following property in the persistence.xml file: With Hibernate you set the following property in the persistence.xml file: Basically you want to make your queries access less data: is your application retrieving more data than it needs? Are queries accessing too many rows or columns? Is the database query analyzing more rows than it needs? 
Watch out for the following: queries which execute too often to retrieve needed data; queries retrieving more data than needed; and queries which are too slow (you can use EXPLAIN to see where you should add indexes). With MySQL you can use the slow query log to see which queries are executing slowly, or you can use the MySQL query analyzer to see slow queries, query execution counts, and results of EXPLAIN statements. Understanding EXPLAIN For slow queries, you can precede a SELECT statement with the keyword EXPLAIN to get information about the query execution plan, which explains how it would process the SELECT, including information about how tables are joined and in which order. This helps find missing indexes early in the development process. You should index columns that are frequently used in query WHERE and GROUP BY clauses, and columns frequently used in joins, but be aware that indexes can slow down inserts and updates. Lazy Loading and JPA With JPA, one-to-many and many-to-many relationships lazy load by default, meaning they will be loaded when the entity in the relationship is accessed. Lazy loading is usually good, but if you need to access all of the "many" objects in a relationship, it will cause n+1 selects, where n is the number of "many" objects. You can change the relationship to be loaded eagerly as follows: However, you should be careful with eager loading, which could cause SELECT statements that fetch too much data. It can cause a Cartesian product if you eagerly load entities with several related collections. If you want to override the LAZY fetch type for specific use cases, you can use a fetch join. For example, this query would eagerly load the employee addresses: In general you should lazily load relationships, test your use case scenarios, check the SQL log, and use @NamedQueries with JOIN FETCH to eagerly load when needed (see the sketch after the references below). Partitioning The main goal of partitioning is to reduce the amount of data read for particular SQL operations so that the overall response time is reduced. Vertical Partitioning splits tables with many columns into multiple tables with fewer columns, so that only certain columns are included in a particular dataset, with each partition including all rows. Horizontal Partitioning segments table rows so that distinct groups of physical row-based datasets are formed. All columns defined to a table are found in each set of partitions. An example of horizontal partitioning might be a table that contains historical data being partitioned by date. Vertical Partitioning In the example of vertical partitioning below, a table that contains a number of very wide text or BLOB columns that aren't referenced often is split into two tables, with the most referenced columns in one table and the seldom-referenced text or BLOB columns in another. By removing the large data columns from the table, you get a faster query response time for the more frequently accessed Customer data. Wide tables can slow down queries, so you should always ensure that all columns defined to a table are actually needed. The example below shows the JPA mapping for the tables above. The Customer data table with the more frequently accessed and smaller data types is mapped to the Customer entity, while the CustomerInfo table with the less frequently accessed and larger data types is mapped to the CustomerInfo entity with a lazily loaded one-to-one relationship to the Customer. Horizontal Partitioning The major forms of horizontal partitioning are by Range, Hash, Hash Key, List, and Composite. 
Horizontal partitioning can make queries faster because the query optimizer knows which partitions contain the data that will satisfy a particular query and will access only those necessary partitions during query execution. Horizontal partitioning works best for large database applications that contain a lot of query activity targeting specific ranges of database tables. Hibernate Shards Partitioning data horizontally into "shards" is used by Google, LinkedIn, and others to give extreme scalability for very large amounts of data. eBay "shards" data horizontally along its primary access path. Hibernate Shards is a framework designed to encapsulate support for horizontal partitioning into Hibernate Core. Caching JPA Level 2 caching avoids database access for already loaded entities; this makes reading frequently accessed, unmodified entities faster, but it can give bad scalability for frequently or concurrently updated entities. You should configure L2 caching for entities that are read often, modified infrequently, and not critical if stale. You should also configure L2 (vendor-specific) caching for maxElements, time to expire, refresh... References and More Information: JPA Best Practices presentation MySQL for Developers Article MySQL for Developers presentation MySQL for Developers screencast Keeping a Relational Perspective for Optimizing Java Persistence Java Persistence with Hibernate Pro EJB 3: Java Persistence API Java Persistence API 2.0: What's New? High Performance MySQL book Pro MySQL, Chapter 6: Benchmarking and Profiling EJB 3 in Action Sharding the Hibernate Way JPA Caching Best Practices for Large-Scale Web Sites: Lessons from eBay
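To make the lazy-loading and JOIN FETCH advice above concrete, here is a hedged sketch of a named query that eagerly fetches a collection for one specific use case while leaving the mapping itself lazy. The Employee and Address entities and their field names are invented for illustration; the article's own example entities may differ.

import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.OneToMany;

@Entity
@NamedQuery(
    name = "Employee.findAllWithAddresses",
    query = "SELECT DISTINCT e FROM Employee e JOIN FETCH e.addresses")
public class Employee {

    @Id
    private Long id;

    // One-to-many relationships are LAZY by default; the named query above
    // overrides that only for the use cases that really need the addresses.
    @OneToMany(mappedBy = "employee")
    private Set<Address> addresses; // Address is another entity in this sketch (not shown)

    // getters and setters omitted
}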
August 31, 2009
by Carol McDonald
· 41,361 Views · 1 Like
article thumbnail
JPA Implementation Patterns: Lazy Loading
Model your complete database, with all its relations, using this JPA pattern for lazy loading.
August 19, 2009
by Vincent Partington
· 119,793 Views · 6 Likes
article thumbnail
Spring Integration: A Hands-On Tutorial, Part 1
This tutorial is the first in a two-part series on Spring Integration. In this series we're going to build out a lead management system based on a message bus that we implement using Spring Integration. Our first tutorial will begin with a brief overview of Spring Integration and also just a bit about the lead management domain. After that we'll build our message bus. The second tutorial continues where the first leaves off and builds the rest of the bus. I've written the sample code for this tutorial as a Maven 2 project. I'm using Java 5, Spring Integration 1.0.3 and Spring 2.5.6. The code also works for Java 6. I've used Maven profiles to isolate the dependencies you'll need if you're running Java 5. The tutorials assume that you're comfortable with JEE, the core Spring framework and Maven 2. Also, Eclipse users may find the m2eclipse plug-in helpful. To complete the tutorial you'll need an IMAP account, and you'll also need access to an SMTP server. Let's begin with an overview of Spring Integration. A bird's eye view of Spring Integration Spring Integration is a framework for implementing a dynamically configurable service integration tier. The point of this tier is to orchestrate independent services into meaningful business solutions in a loosely-coupled fashion, which makes it easy to rearrange things in the face of changing business needs. The service integration tier sits just above the service tier as shown in figure 1. Following the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf (Addison-Wesley), Spring Integration adopts the well-known pipes and filters architectural style as its approach to building the service integration layer. Abstractly, filters are information-processing units (any type of processing—doesn't have to be information filtering per se), and pipes are the conduits between filters. In the context of integration, the network we're building is a messaging infrastructure—a so-called message bus—and the pipes and filters are called message channels and message endpoints, respectively. The network carries messages from one endpoint to another via channels, and the message is validated, routed, split, aggregated, resequenced, reformatted, transformed and so forth as the different endpoints process it. Figure 1. The service integration tier orchestrates the services below it. That should give you enough technical context to work through the tutorial. Let's talk about the problem domain for our sample integration, which is enrollment lead management in an online university setting. Lead management overview In many industries, such as the mortgage industry and for-profit education, one important component of customer relationship management (CRM) is managing sales leads. This is a fertile area for enterprise integration because there are typically multiple systems that need to play nicely together in order to pull the whole thing off. Examples include front-end marketing/lead generation websites, external lead vendor systems, intake channels for submitted leads, lead databases, e-mail systems (e.g., to accept leads, to send confirmation e-mails), lead qualification systems, sales systems and potentially others. This tutorial and the next use Spring Integration to integrate several systems of the kind just mentioned into an overall lead management capability for a hypothetical online university. 
Specifically we’ll integrate the following: • a CRM system that allows campus and call center staff to create leads directly, as they might do for walk-in or phone-in leads • a Request For Information (RFI) form on a lead generation ("lead gen") marketing website • a legacy e-mail based RFI channel • an external CRM that the international enrollment staff uses to process international leads • confirmation e-mails Figure 2 shows what it will look like when we’re done with both tutorials. For now focus on the big picture rather than the details. Figure 2. This is the lead management system we'll build. For this first tutorial we're simply going to establish the base staff interface, the (dummy) backend service that saves leads to a database, and confirmation e-mails. The second tutorial will deal with lead routing, web-based RFIs and e-mail-based RFIs. Let's dive in. We’ll begin with the basic lead creation page in the CRM and expand out from there. Building the core components [You can download the source code for this section of the tutorial here] We’re going to start by creating a lead creation HTML form for campus and call center staff. That way, if walk-in or phone-in leads express an interest, we can get them into the system. This is something that might appear as a part of a lead management module in a CRM system, as shown in figure 3. Figure 3. We'll build our lead management module with integration in mind from the beginning. Because we’re interested in the integration rather than the actual app features, we’re not really going to save the lead to the database. Instead we’ll just call a createLead() method against a local LeadService bean and leave it at that. But we will use Spring Integration to move the lead from the form to the service bean. Our first stop will be the domain model. DZone readers get 30% off Spring in Practice by Willie Wheeler and John Wheeler. Use code dzone30 when checking out with any version of the book at www.manning.com. Create the domain model We’ll need a domain object for leads, so listing 1 shows the one we’ll use. It’s not an industrial-strength representation, but it will do for the purposes of the tutorial. Listing 1. Lead.java, a basic domain object for leads. package crm.model;... other imports ...public class Lead { private static DateFormat dateFormat = new SimpleDateFormat(); private String firstName; private String middleInitial; private String lastName; private String address1; private String address2; ... other fields ... public Lead() { } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } ... other getters and setters, and a toString() method ...} There is nothing special happening here at all. So far the Lead class is just a bunch of getters and setters. You can see the full code listing in the download. If you thought that was underwhelming, just wait until you see the LeadServiceImpl service bean in listing 2. Listing 2. LeadServiceImpl.java, a dummy service bean. package crm.service;import java.util.logging.Logger;import org.springframework.stereotype.Service;import crm.model.Lead;@Service("leadService")public class LeadServiceImpl implements LeadService { private static Logger log = Logger.getLogger("global"); public void createLead(Lead lead) { log.info("Creating lead: " + lead); } This is just a dummy bean. In real life we’d save the lead to a database. 
The bean implements a basic LeadService interface that we've suppressed here, but it's available in the code download. Now that we have our domain model, let’s use Spring Integration to create a service integration tier above it. Create the service integration tier If you look back at figure 3, you’ll see that the CRM app pushes lead data to the service bean by way of a channel called newLeadChannel. While it’s possible for the CRM app to push messages onto the channel directly, it’s generally more desirable to keep the systems you’re integrating decoupled from the underlying messaging infrastructure, such as channels. That allows you to configure service orchestrations dynamically instead of having to go into the code. Spring Integration supports the Gateway pattern (described in the aforementioned Enterprise Integration Patterns book), which allows an application to push messages onto the message bus without knowing anything about the messaging infrastructure. Listing 3 shows how we do this. Listing 3. LeadGateway.java, a gateway offering access to the messaging system. package crm.integration.gateways;import org.springframework.integration.annotation.Gateway;import crm.model.Lead;public interface LeadGateway { @Gateway(requestChannel = "newLeadChannel") void createLead(Lead lead);} We are of course using the Spring Integration @Gateway annotation to map the method call to the newLeadChannel, but gateway clients don’t know that. Spring Integration will use this interface to create a dynamic proxy that accepts a Lead instance, wraps it with an org.springframework.integration.core.Message, and then pushes the Message onto the newLeadChannel. The Lead instance is the Message body, or payload, and Spring Integration wraps the Lead because only Messages are allowed on the bus. We need to wire up our message bus. Figure 4 shows how to do that with an application context configuration file. Listing 4. /WEB-INF/applicationContext-integration.xml message bus definition. The first thing to notice here is that we've made the Spring Integration namespace our default namespace instead of the standard beans namespace. The reason is that we're using this configuration file strictly for Spring Integration configuration, so we can save some keystrokes by selecting the appropriate namespace. This works pretty nicely for some of the other Spring projects as well, such as Spring Batch and Spring Security. In this configuration we've created the three messaging components that we saw in figure 3. First, we have an incoming lead gateway to allow applications to push leads onto the bus. We simply reference the interface from listing 3; Spring Integration takes care of the dynamic proxy. Next we create a publish/subscribe ("pub-sub") channel called newLeadChannel. This is the channel that the @Gateway annotation referenced in listing 3. A pub-sub channel can publish a message to multiple endpoints simultaneously. For now we have only one subscriber—a service activator—but we already know we're going to have others, so we may as well make this a pub-sub channel. The service activator is an endpoint that allows us to bring our LeadServiceImpl service bean onto the bus. We're injecting the newLeadChannel into the input end of the service activator. When a message appears on the newLeadChannel, the service activator will pass its Lead payload to the leadService bean's createLead() method. Stepping back, we've almost implemented the design described by figure 3. 
The only part that remains is the lead creation frontend, which we'll address right now. Create the web tier Our user interface for creating new leads will be a web-based form that we implement using Spring Web MVC. The idea is that enrollment staff at campuses or call centers might use such an interface to handle walk-in or phone-in traffic. Listing 5 shows our simple @Controller. Listing 5. LeadController.java, a @Controller to allow staff to create leads package crm.web;import java.util.Date;import org.springframework.beans.factory.annotation.Autowired;import org.springframework.stereotype.Controller;import org.springframework.ui.Model;import org.springframework.web.bind.annotation.RequestMapping;import org.springframework.web.bind.annotation.RequestMethod;import crm.integration.gateways.LeadGateway;import crm.model.Country;import crm.model.Lead;@Controllerpublic class LeadController { @Autowired private LeadGateway leadGateway; @RequestMapping(value = "/lead/form.html", method = RequestMethod.GET) public void getForm(Model model) { model.addAttribute(Country.getCountries()); model.addAttribute(new Lead()); } @RequestMapping(value = "/lead/form.html", method = RequestMethod.POST) public String postForm(Lead lead) { lead.setDateCreated(new Date()); leadGateway.createLead(lead); return "redirect:form.html?created=true"; } This isn't an industrial-strength controller as it doesn't do HTTP parameter whitelisting (for example, via an @InitBinder method) and form validation, both of which you would expect from a real implementation. But the main pieces from a Spring Integration perspective are here. We're autowiring the gateway into the @Controller, and we have methods for serving up the empty form and for processing the submitted form. The getForm() method references a Countries class that we've suppressed (it's in the code download); it just puts a list of countries on the model so the form can present a Country field to the staff member. The postForm() method invokes the createLead() method on the gateway. This will pass the Lead to the dynamic proxy LeadGateway implementation, which in turn will wrap the Lead with a Message and then place the Message on the newLeadChannel. There are a few other configuration files you will need to put in place, including web.xml, main-servlet.xml and applicationContext.xml. There's also a JSP for the web form. As none of these relates directly to Spring Integration, we won't treat them here. Please see the code download for details. With that, we've established a baseline system. To try it out, run mvn jetty:run against crm/pom.xml and point your browser at http://localhost:8080/crm/main/lead/form.html You should see a very basic-looking web form for entering lead information. Enter some user information (it doesn't matter what you enter—recall that we don't have any form validation) and press Submit. The console should report that LeadServiceImpl.createLead() created a lead. Congratulations! Even though we now have a working system, it isn't very interesting. From here on out (this tutorial and the next) we'll be adding some common features to make the lead management system more capable. Our first addition will be confirmation e-mails; the next tutorial will present further additions. Adding confirmation e-mails [The source for this section is available here] After an enrollment advisor (or some other staff member) creates a lead in the system, we want to send the lead an e-mail letting him know that that's happened. 
Actually—and this is a critical point—we really don't care how the lead was created. Anytime a lead appears on the newLeadChannel, we want to fire off a confirmation e-mail. I'm making the distinction because it points to an important aspect of the message bus: it allows us to control lead processing code centrally instead of having to chase it down in a bunch of different places. Right now there's only one way to create leads, but figure 2 revealed that we'll be adding others. No matter how many we add, they'll all result in sending a confirmation e-mail out to the lead. Figure 4 shows the new bit of plumbing we're going to add to our message bus. Figure 4. Send a confirmation e-mail when creating a lead. To do this, we're going to need to make a few changes to the configuration and code. POM changes First we need to update the POM. Here's a summary of the changes; see the code download for details: • Add a JavaMail dependency to the Jetty plug-in. • Add an org.springframework.context.support dependency. • Add a spring-integration-mail dependency. • Set the mail.version property. These changes will allow us to use JavaMail. Expose JavaMail sessions through JNDI We'll also need to add a /WEB-INF/jetty-env.xml configuration to make our JavaMail sessions available via JNDI. Once again, see the code download for details. I've included a /WEB-INF/jetty-env.xml.sample configuration for your convenience. As mentioned previously, you'll need access to an SMTP server. Besides creating jetty-env.xml, we'll need to update applicationContext.xml. Listing 6 shows the changes we need so we can use JavaMail and SMTP. Listing 6. /WEB-INF/applicationContext.xml changes supporting JavaMail and SMTP The changes expose JavaMail sessions as a JNDI resource. We've declared the jee namespace and its schema location, configured the JNDI lookup, and created a JavaMailSenderImpl bean that we'll use for sending mail. We won't need any domain model changes to generate confirmation e-mails. We will however need to create a bean to back our new transformer endpoint. Service integration tier changes First, recall from figure 4 that the newLeadChannel feeds into a LeadToEmailTransformer endpoint. This endpoint takes a lead as an input and generates a confirmation e-mail as an output, and the e-mail gets pipes out to an SMTP transport. In general, transformers transform given inputs into desired outputs. No surprises there. Figure 4 is slightly misleading since it's actually the POJO itself that we're going to call LeadToEmailTransformer; the endpoint is really just a bean adapter that the messaging infrastructure provides so we can place the POJO on the message bus. Listing 7 presents the LeadToEmailTransformer POJO. Listing 7. LeadToEmailTransformer.java, a POJO to generate confirmation e-mails package crm.integration.transformers;import java.util.Date;import java.util.logging.Logger;import org.springframework.integration.annotation.Transformer;import org.springframework.mail.MailMessage;import org.springframework.mail.SimpleMailMessage;import crm.model.Lead;public class LeadToEmailTransformer { private static Logger log = Logger.getLogger("global"); private String confFrom; private String confSubj; private String confText; ... getters and setters for the fields ... 
@Transformer public MailMessage transform(Lead lead) { log.info("Transforming lead to confirmation e-mail: " + lead); String leadFullName = lead.getFullName(); String leadEmail = lead.getEmail(); MailMessage msg = new SimpleMailMessage(); msg.setTo(leadFullName == null ? leadEmail : leadFullName + " <" + leadEmail + ">"); msg.setFrom(confFrom); msg.setSubject(confSubj); msg.setSentDate(new Date()); msg.setText(confText); log.info("Transformed lead to confirmation e-mail: " + msg); return msg; } Again, LeadToEmailTransformer is a POJO, so we use the @Transformer annotation to select the method that's performing the transformation. We use a Lead for the input and a MailMessage for the output, and perform a simple transformation in between. When defining backing beans for the various Spring Integration filters, it's possible to specify a Message as an input or an output. That is, if we want to deal with the messages themselves rather than their payloads, we can do that. (Don't confuse the MailMessage in listing 7 with a Spring Integration message; MailMessage represents an e-mail message, not a message bus message.) We might do that in cases where we want to read or manipulate message headers. In this tutorial we don't need to do that, so our backing beans just deal with payloads. Now we'll need to build out our message bus so that it looks like figure 4. We do this by updating applicationContext-integration.xml as shown in listing 8. Listing 8. /WEB-INF/applicationContext-integration.xml updates to support confirmation e-mails The property-placeholder configuration loads the various ${...} properties from a properties file; see /crm/src/main/resources/applicationContext.properties in the code download. You don't have to change anything in the properties file. The transformer configuration brings the LeadToEmailTransformer bean into the picture so it can transform Leads that appear on the newLeadChannel into MailMessages that it puts on the confEmailChannel. As a side note, the p namespace way of specifying bean properties doesn't seem to work here (I assume it's a bug: http://jira.springframework.org/browse/SPR-5990), so I just did it the more verbose way. The channel definition defines a point-to-point channel rather than a pub-sub channel. That means that only one endpoint can pull messages from the channel. Finally we have an outbound-channel-adapter that grabs MailMessages from the confEmailChannel and then sends them using the referenced mailSender, which we defined in listing 6. That's it for this section. We should have working confirmation e-mails. Restart your Jetty instance and go again to http://localhost:8080/crm/main/lead/form.html Fill it out and provide your real e-mail address in the e-mail field. A few moments after submitting the form you should receive a confirmation e-mail. If you don't see it, you might check your SMTP configuration in jetty-env.xml, or else check your spam folder. Summary In this tutorial we've taken our first steps toward developing an integrated lead management system. 
Though the current bus configuration is simple, we've already seen some key Spring Integration features, including • support for the Gateway pattern, allowing us to connect apps to the message bus without knowing about messages • point-to-point and pub-sub channels • service activators to allow us to place service beans on the bus • message transformers • outbound SMTP channel adapters to allow us to send e-mail The second tutorial will continue elaborating what we've developed here, demonstrating the use of several additional Spring Integration features, including • message routers (including content-based message routers) • outbound web service gateways for sending SOAP messages • inbound HTTP adapters for collecting HTML form data from external systems • inbound e-mail channel adapters (we'll use IMAP IDLE, though POP and IMAP are also possible) for processing incoming e-mails Enjoy, and stay tuned. Willie is a solutions architect with 12 years of Java development experience. He and his brother John are coauthors of the upcoming book Spring in Practice by Manning Publications (www.manning.com/wheeler/). Willie also publishes technical articles (including many on Spring) to wheelersoftware.com/articles/.
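For reference, the LeadService interface that the service bean in listing 2 implements is suppressed in the article. Based on how LeadServiceImpl and the gateway use it, it is presumably just a single-method interface along these lines (the code download has the real version):

package crm.service;

import crm.model.Lead;

// Sketch of the suppressed LeadService interface, inferred from its usage above.
public interface LeadService {
    void createLead(Lead lead);
}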
August 18, 2009
by Willie Wheeler
· 248,927 Views · 3 Likes
article thumbnail
Urlencode/urldecode As MySQL Stored Functions
DELIMITER ; DROP FUNCTION IF EXISTS multiurldecode; DELIMITER | CREATE FUNCTION multiurldecode (s VARCHAR(4096)) RETURNS VARCHAR(4096) DETERMINISTIC CONTAINS SQL BEGIN DECLARE pr VARCHAR(4096) DEFAULT ''; IF ISNULL(s) THEN RETURN NULL; END IF; REPEAT SET pr = s; SELECT urldecode(s) INTO s; UNTIL pr = s END REPEAT; RETURN s; END; | DELIMITER ;
August 18, 2009
by Snippets Manager
· 13,274 Views · 5 Likes
article thumbnail
Simple Python Watchdog Timer
Easily interrupt long portions of code if they take too long to run. #!/usr/bin/python # file: watchdog.py # license: MIT License import signal class Watchdog(Exception): def __init__(self, time=5): self.time = time def __enter__(self): signal.signal(signal.SIGALRM, self.handler) signal.alarm(self.time) def __exit__(self, type, value, traceback): signal.alarm(0) def handler(self, signum, frame): raise self def __str__(self): return "The code you executed took more than %ds to complete" % self.time Example: #!/usr/bin/python # import the class from watchdog import Watchdog # don't allow long_function to take more than 5 seconds to complete try: with Watchdog(5): long_function() except Watchdog: print "long_function() took too long to complete"
August 9, 2009
by Snippets Manager
· 6,806 Views