The Latest Coding Topics

Scala: val, lazy val and def
We have a variety of val, lazy val and def definitions across our code base, but have been led to believe that idiomatic Scala would have us using lazy val as frequently as possible. As far as I understand so far, this is what the different definitions do:

val evaluates as soon as you initialise the object and stores the result.
lazy val evaluates the first time it's accessed and stores the result.
def executes the piece of code every time, pretty much like a Java method would.

In Java, C# or Ruby I would definitely favour the third option because it reduces the amount of state that an object has to hold. I'm not sure that holding that state matters so much in Scala, because all the default data structures we use are immutable, so you can't do any harm by having access to them. I recently read an interesting quote from Rich Hickey which seems applicable here:

"To the extent the data is immutable, there is little harm that can come of providing access, other than that someone could come to depend upon something that might change. Well, okay, people do that all the time in real life, and when things change, they adapt."

If the data were mutable then it would be possible to change it from any other place in the class, which would make it difficult to reason about the object because the data might be in an unexpected state. If we define something as a val in Scala then it's not even possible to change the reference to that value, so it doesn't seem problematic. Perhaps I just need a bit of a mind shift to not worry so much about state when it's immutable. It's only been a few weeks, so I'd be interested to hear the opinions of more seasoned Scala users.

I've read that there are various performance gains to be had from making use of lazy val or def depending on the usage of the properties, but that would seem to be a premature optimisation, so we haven't been considering it so far.

From http://www.markhneedham.com/blog/2011/06/22/scala-val-lazy-val-and-def/
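As a quick illustration of the three behaviours (my own sketch, not part of the original post), this can be pasted into the Scala REPL:

    object Definitions {
      val eager = { println("evaluating val"); 1 }              // runs when the object is initialised
      lazy val deferred = { println("evaluating lazy val"); 2 } // runs on first access, result cached
      def everyTime = { println("evaluating def"); 3 }          // runs on every call
    }

    Definitions.eager      // prints "evaluating val" (object initialisation)
    Definitions.deferred   // prints "evaluating lazy val"
    Definitions.deferred   // prints nothing; the cached 2 is returned
    Definitions.everyTime  // prints "evaluating def"
    Definitions.everyTime  // prints "evaluating def" again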
June 23, 2011
by Mark Needham
· 11,647 Views
Reading GPS Latitude and Longitude from Image and Video Files
The State of GPS Data from Mobile Devices

Most mobile devices today support GPS geotagging. In fact, most of them come bundled with navigation software that uses GPS, and therefore all pictures and (maybe) videos can be geotagged. But, as expected, different vendors come with different support and formats. iPhone OS supports geotagging on both video and image files, while the latest Android and Symbian (Nokia's main smartphone OS) can geotag only images. What's more, until recently Symbian didn't support any geotagging without the installation of additional software, such as Location Tagger. So generally things are quite simple:

iPhone OS geotags both video and image files;
Android geotags only images;
Symbian geotags only images, and on some devices only after installing extra software.

That, in brief, is the state of mobile device geotagging.

Why Use GPS Data?

Perhaps one of the main reasons geotagging is not supported, especially on video files, lies in how those geotags are used. First of all, what does a geotag mean? You may have heard that even though Android doesn't "geotag" videos, this is not quite true: after using your gallery you can see where those videos were shot. This is fantastic, but the real information about where the video was taken is not in the video file itself; it is kept in an additional log file. The video file itself does not know the geo coordinates. Here comes the problem with video formats: you cannot be sure that every format supports tags that can hold geo coordinates. QuickTime's MOV can store them, while Symbian's 3GP cannot; in fact, Symbian cannot store any geo information in video files. So now we have at least three different formats for those three vendors:

QuickTime for iPhone
MPEG-4 for Android
3GP for Symbian

For now I can only say that the iPhone can keep the geotag inside its video files. But let me return to the question: why do we need these geotags? As long as the video file stays on the mobile device, there is no problem. But once you try to upload it — to Flickr, YouTube, Picasa, etc. — you will lose any geo information that is not in the file's tags (and, of course, the sites above must be able to read it). The general reason to store this data in the file is to move it along with the file: once you move the file from your mobile device to a web platform, you can still see where it was created.

EXIF, Exiftool and PHP's exif_read_data

There are several tools for reading geotags. For images, and here we talk only about JPEGs, this is the EXIF information. You can download the exif command line program and try to read the data with it:
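The command-line example itself was cut off in this copy of the article; as a rough sketch of what reading the tags looks like (the file name is hypothetical, and exiftool is an alternative to the exif program mentioned above):

    # dump the EXIF data, including the GPS block, from a JPEG
    exif photo.jpg

    # or, with exiftool, ask for the GPS coordinates directly
    exiftool -GPSLatitude -GPSLongitude photo.jpg

And the PHP counterpart the section title refers to, again as a minimal sketch:

    <?php
    // exif_read_data() is PHP's built-in EXIF reader
    $exif = exif_read_data('photo.jpg');
    print_r($exif['GPSLatitude']);   // degrees/minutes/seconds as rational strings
    print_r($exif['GPSLongitude']);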
June 22, 2011
by Stoimen Popov
· 15,296 Views
Eclipse Indigo Release Train Now Available: 46 Million Lines of Code Across 62 Projects
For the eighth successive year, the latest iteration of the Eclipse release train, Indigo, is now available for developers everywhere. And once again, the Eclipse community has shown that it is possible to coordinate software to be released on time. The scale of Indigo is huge: it contains 62 projects and 46 million lines of code contributed by 408 committers.

"We are very proud to celebrate another on-time annual release train from the Eclipse community," states Mike Milinkovich, executive director of the Eclipse Foundation. "This release has a long list of new features, especially for Java developers. Features such as Git support, Maven and Hudson integration, a great GUI builder in WindowBuilder, and our new Jubula testing tool will, I am sure, motivate developers to try Indigo."

Yesterday I listed some of the excellent tooling additions that are available in Indigo. Once again, the latest Eclipse release provides something for everyone. Download it now and find out for yourself.

For Java Developers

EGit 1.0 provides first-class support for Java developers using Git for source code management.
WindowBuilder, a world-class Eclipse-based GUI builder, is now available as an Eclipse open source project.
Automated functional GUI testing for Java and HTML applications is included via Jubula.
m2eclipse brings tight integration with Maven and the Eclipse workspace, enabling developers to work with Maven projects directly from Eclipse.
Mylyn 3.6 supports Hudson build monitoring directly from the Eclipse workspace.
Eclipse Marketplace Client now supports drag-and-drop installation of Eclipse-based solutions directly into Eclipse, making it significantly easier to install new solutions.

New Innovation in Eclipse Modeling

Xtext 2.0 has added significant new features for domain-specific languages (DSLs): 1) the ability to create DSLs with embedded Java-like expressions; 2) Xtend, a new template language that allows tightly integrated code generation into the Eclipse tooling environment; and 3) a new refactoring framework for DSLs.
Acceleo 3.1 integrates code generation into Ant and Maven build chains, and includes improved generator editing facilities.
CDO Model Repository 4.0 integrates with several NoSQL databases such as Objectivity/DB, MongoDB, and DB4O. Cache optimizations and many other enhancements allow for models of several gigabytes.
EMF 2.7 makes it easy to replicate changes across distributed systems in an optimal way: a client can send back to the server a minimal description of what's been changed rather than sending back the whole, arbitrarily-large, new instance.
Eclipse Extended Editing Framework (EEF) 1.0 generates advanced and good-looking EMF editors in one click.
EMF Compare 1.2 brings dedicated UML support and is more fully integrated with the SCM.
EMF Facet, a new project, allows extension of an existing Ecore metamodel without modification.

EclipseRT Advancements

EclipseLink 2.3 supports multi-tenant JPA entities, making it possible to incorporate JPA persistence into SaaS-style applications.
Equinox 3.7 now implements the OSGi 4.3 specification, including use of generic signatures, generic capabilities, and requirements for bundles.
Eclipse Communication Framework (ECF) implements the OSGi 4.2 Remote Service and Remote Service Admin standards.
June 22, 2011
by James Sugrue
· 13,292 Views
Eclipse Indigo Highlights: Five Reasons to Check Out ECF
The Eclipse Communication Framework has been a steady participant in the Eclipse release trains, continuously adding to its impressive list of features. This year's inclusion of ECF 3.5 in the Indigo release train is no exception. In this article, I'll take a look at five key features of the release.

OSGi 4.2 Remote Services/RSA Standards Support

ECF Indigo implements two recently-completed OSGi standards: OSGi Remote Services and OSGi Remote Service Admin (RSA). The OSGi Remote Services spec provides a simple, standardized way to expose OSGi services for network discovery and remote access. ECF Indigo also implements the Enterprise specification for remote services management known as Remote Service Admin (RSA). The RSA specification defines a management agent to allow for enterprise-application control of the discovery and distribution of remote services via a standardized API. Also included in the RSA specification are a standardized format for communicating metadata about remote services, advanced handling of security, discovery and distribution event notification, and advanced handling of remote service versioning. ECF has run its implementation of RS/RSA through the OSGi Test Compatibility Kit to ensure that it is compliant with the OSGi specification.

Extensibility Through Provider Architecture

ECF has a provider architecture that allows major components of the OSGi Remote Services/RSA implementation to be extended, enhanced, or replaced as needed. For example, for interoperability with existing services and applications, it's frequently desirable to be able to swap the wire protocol/transport for one that is already being used. With the ECF provider architecture, it's possible to substitute the underlying protocol and use other frameworks based upon REST, SOAP, JMS, XML-RPC, XMPP, and/or others. If you wish, you can even define a proprietary provider and use it to expose your remote services. Or you can use one provider for remote services development and testing, and another for deployment.

Asynchronous Proxies

ECF supports remote service access via asynchronous proxies. This allows client consumers of remote services to avoid the reliability problems that are frequent when synchronous proxies are used over a relatively slow and unreliable network. The choice of whether to use synchronous or asynchronous proxies is up to the programmer, and can be made at runtime. More information about this feature of ECF's remote services implementation is linked below.

XML-RPC Provider

ECF Indigo has an XML-RPC-based provider which implements the remote services API. Remote service invocation through a proxy and/or async proxy is supported too. In addition to being usable for interoperability with existing XML-RPC-based services, it can also serve as an example of how to easily use an existing framework to create a remote service provider.

Google Wave Provider

Although discontinued by Google, Wave is an open protocol with an open source implementation of the Wave server available. This means you can still build applications that take advantage of real-time shared editing functionality from within your Eclipse environment using this provider. ECF already provides real-time shared editing using Cola, but that is limited to two users on a document at a time; using the Wave provider, you could have multiple authors collaborating on the same document.

Mustafa and Sebastian created a multiplayer Android phone game for EclipseCon this year, using the Wave protocol for concurrency control. Take a look at the results in the video accompanying the original post.

ECF on Other OSGi Frameworks

You're not limited to running ECF on Equinox anymore: ECF4Felix allows ECF to run on the Felix OSGi framework. So far testing has only been done on Felix, but if you are willing to help with testing ECF Remote Services/RSA on another framework, please send an email to the ecf-dev mailing list.

ECF Documentation Project

ECF recently started the ECF Documentation Project, an effort to improve the amount and quality of the ECF documentation with the help of the committer, contributor, and consumer communities. It also aims to ease the use of ECF for new and existing consumers. Currently this includes a Users Guide and an Integrators Guide. For users of ECF, the documentation effort is a huge help in getting ECF to work right within your application. Great credit is due to the ECF team for this, and for all the other features listed here.

Links:

ECF wiki: http://wiki.eclipse.org/ECF
Remote services section of ECF wiki: http://wiki.eclipse.org/ECF#OSGi_Remote_Services
OSGi compendium specification (Chapter 13 is Remote Services): http://www.osgi.org/download/r4v42/r4.cmpn.pdf
OSGi Enterprise Specification (Chapter 122 is RSA): http://www.osgi.org/download/r4v42/r4.enterprise.pdf
RSA wiki pages: http://wiki.eclipse.org/Remote_Services_Admin
Getting Started with Remote Services: http://wiki.eclipse.org/EIG:Getting_Started_with_OSGi_Remote_Services
Asynchronous Proxies (examples): http://wiki.eclipse.org/Asynchronous_Proxies_for_Remote_Services
ECF builder: https://build.ecf-project.org/jenkins/
ECF GitHub site (other providers, examples, Wave, and Newsreader): https://github.com/ECF
ECF4Felix: https://github.com/ECF/ECF4Felix
June 22, 2011
by James Sugrue
· 15,201 Views
Java Web Application Security - Part V: Penetrating with Zed Attack Proxy
Web application security is an important part of developing applications. As developers, I think we often forget this, or simply ignore it. In my career, I've learned a lot about web application security. However, I only recently learned about and became familiar with the rapidly growing "appsec" industry. I found a disconnect between what appsec consultants were selling and what I was developing. It seemed like appsec consultants were selling me fear, mostly because I thought my apps were secure. So I set out on a mission to learn more about web application security and penetration testing to see if my apps really were secure. This article is part of that mission, as are the previous articles I've written in this series:

Java Web Application Security - Part I: Java EE 6 Login Demo
Java Web Application Security - Part II: Spring Security Login Demo
Java Web Application Security - Part III: Apache Shiro Login Demo
Java Web Application Security - Part IV: Programmatic Login APIs

When I first decided I wanted to do a talk on webapp security, I knew it would be more interesting if I showed the audience how to hack and fix an application. That's why I wrote it into my original proposal:

"Webapp Security: Develop. Penetrate. Protect. Relax. In this session, you'll learn how to implement authentication in your Java web applications using Spring Security, Apache Shiro and good ol' Java EE container managed authentication. You'll also learn how to secure your REST API with OAuth and lock it down with SSL. After learning how to develop authentication, I'll introduce you to OWASP, the OWASP Top 10, its Testing Guide and its Code Review Guide. From there, I'll discuss using WebGoat to verify your app is secure and commercial tools like webapp firewalls and accelerators."

At the time, I hadn't done much webapp pentesting. You can tell this from the fact that I mentioned WebGoat as the pentesting tool. From WebGoat's project page:

"WebGoat is a deliberately insecure J2EE web application maintained by OWASP designed to teach web application security lessons. In each lesson, users must demonstrate their understanding of a security issue by exploiting a real vulnerability in the WebGoat application. For example, in one of the lessons the user must use SQL injection to steal fake credit card numbers. The application is a realistic teaching environment, providing users with hints and code to further explain the lesson."

What I really meant to say, and use, was Zed Attack Proxy, also known as OWASP ZAP. ZAP is a Java desktop application that you set up as a proxy for your browser, then use to find vulnerabilities in your application. This article explains how you can use ZAP to pentest a web application and fix its vulnerabilities. The application I'll be using is the Ajax Login application I've been using throughout this series. I think it's great that projects like Damn Vulnerable Web App and WebGoat exist, but I wanted to test one that I think is secure, rather than one I know is not secure. In this particular example, I'll be testing the Spring Security implementation, since that's the framework I most often use in my open source projects.

Zed Attack Proxy Tutorial

Download and run the application
Install and configure ZAP
Perform a scan
Fix vulnerabilities
Summary

Download and Run the Application

To begin, download the application and expand it on your hard drive.
This app is the completed version of the Ajax Login application referenced in Java Web Application Security - Part II: Spring Security Login Demo. You'll need Java 6 and Maven installed to run the app. Run it using mvn jetty:run and open http://localhost:8080 in your browser. You'll see it's a simple CRUD application for users, and you need to log in to do anything.

Install and Configure ZAP

The Zed Attack Proxy (ZAP) is an easy-to-use integrated penetration testing tool for finding vulnerabilities in web applications. Download the latest version (I used 1.3.0) and install it on your system. After installing, launch the app and change the proxy port to 9000 (Tools > Options > Local proxy). Next, configure your browser to proxy requests through port 9000 and allow localhost requests to be proxied. I used Firefox 4 (Preferences > Advanced > Network > Connection Settings).

Another option (instead of removing localhost) is to add an entry to your hosts file with your production domain name. This is what I've done for this demo:

    127.0.0.1 demo.raibledesigns.com

I've also configured Apache to proxy requests to Jetty with the following mod_proxy settings in my httpd.conf:

    ProxyRequests Off
    ProxyPreserveHost Off
    ProxyPass / http://localhost:8080/

    SSLEngine on
    SSLProxyEngine on
    SSLCertificateFile "/etc/apache2/ssl.key/server.crt"
    SSLCertificateKeyFile "/etc/apache2/ssl.key/server.key"
    ProxyPass / https://localhost:8443/

Perform a Scan

Now you need to give ZAP some data to work with. Using Firefox, I navigated to http://demo.raibledesigns.com and browsed around a bit: listing users, adding a new one and deleting an existing one. After doing this, I noticed a number of flags in the ZAP UI under Sites. I then right-clicked on each site (one for http and one for https) and selected Attack > Active Scan site. You should be able to do this from the "Active Scan" tab at the bottom of ZAP, but there's a bug when the URLs are the same. After doing this, I received a number of alerts, ranging from high (cross-site scripting) to low (password autocomplete). Now let's take a look at how to fix them.

Fix Vulnerabilities

One of the things not mentioned by the scan, but #1 in Seven Security (Mis)Configurations in Java web.xml Files, is custom error pages not configured. Custom error pages are configured in this app, but error.jsp prints the exception's stack trace along with "Please check your log files for further information." Stack traces can be really useful to an attacker, so it's important to start by removing that code from src/main/webapp/error.jsp.

The rest of the issues have to do with XSS, autocomplete, and cookies. Let's start with the easy ones. Fixing autocomplete is easy enough: simply change the HTML in login.jsp and userForm.jsp to have autocomplete="off" as part of the <form> tag. Then modify web.xml so http-only and secure cookies are used. While you're at it, add session-timeout and tracking-mode as recommended by the aforementioned web.xml misconfigurations article:

    <session-config>
        <session-timeout>15</session-timeout>
        <cookie-config>
            <http-only>true</http-only>
            <secure>true</secure>
        </cookie-config>
        <tracking-mode>COOKIE</tracking-mode>
    </session-config>

Next, modify Spring Security's remember-me configuration so it uses secure cookies. To do this, add use-secure-cookies="true" to the remember-me element in security.xml. Unfortunately, Spring Security doesn't support HttpOnly cookies, but will in a future release.

The next issue to solve is disabling directory browsing.
You can do this by copying Jetty's webdefault.xml (from the org.eclipse.jetty:jetty-webapp JAR) into src/test/resources and changing its "dirAllowed" init parameter to false:

    <servlet>
        <servlet-name>default</servlet-name>
        <servlet-class>org.mortbay.jetty.servlet.DefaultServlet</servlet-class>
        <init-param>
            <param-name>acceptRanges</param-name>
            <param-value>true</param-value>
        </init-param>
        <init-param>
            <param-name>dirAllowed</param-name>
            <param-value>false</param-value>
        </init-param>
        ...
    </servlet>

You'll also need to modify the Jetty plugin's configuration to point to this file by adding it to its configuration section in pom.xml:

    <contextPath>/</contextPath>
    <defaultsDescriptor>src/test/resources/webdefault.xml</defaultsDescriptor>

Of course, if you're running in production you'll want to configure this in your server's settings rather than in your pom.xml file.

Next, I set out to fix the secure page browser cache issues. I had cache-control meta tags in my SiteMesh decorator; however, according to ZAP, the first meta tag should have "no-cache" instead of "no-store", so I changed it to "no-cache". After making all these changes, I created a new ZAP session and ran an active scan on both sites again.

I believe the first remaining issue (parameter tampering) is because I show the error page when a duplicate user exists. To fix this, I changed UserFormController so it catches a UserExistsException and sends the user back to the form:

    try {
        userManager.saveUser(user);
    } catch (UserExistsException uex) {
        result.addError(new ObjectError("user", uex.getMessage()));
        return "userForm";
    }

However, this still doesn't seem to make the alert go away. This is likely because I'm not filtering/escaping HTML when it's first submitted. I believe the best solution would be to use something like OWASP's ESAPI to filter parameter values. However, I was unable to find integration with Spring MVC's data binding, so I decided not to try to fix this vulnerability.

Finally, I tried to disable jsessionid in URLs using suggestions from Stack Overflow. The previous setting in web.xml (tracking-mode COOKIE) should do this, but it doesn't seem to work with Jetty 8. The other issues (secure page browser cache, HttpOnly cookies and secure cookies) I was unable to solve. The last two are caused by Spring Security, as far as I can tell.

Summary

In this article, I've shown you how to pentest a web application using Firefox and OWASP's Zed Attack Proxy (ZAP). I found ZAP to be a nice tool for figuring out vulnerabilities, but it'd be nice if it had a "retest" feature to see if you fixed an issue for a particular URL. It does have a "resend" feature, but running it didn't seem to clear alerts after I'd fixed them. The issues I wasn't able to solve seemed to be mostly related to frameworks (e.g. Spring Security and HttpOnly cookies) or servers (Jetty not using cookies for tracking). My suspicion is that the Jetty issues arise because it doesn't support Servlet 3 as well as it advertises. I believe this is fair; I am using a milestone release after all. I tried scanning http://demo.raibledesigns.com/ajax-login (which runs on Tomcat 7 at Contegix) and confirmed that no jsessionid exists.

Hopefully this article has helped you understand how to figure out security vulnerabilities in your web applications. I believe ZAP will continue to get more popular as developers become aware of it. If you feel ambitious and want to try to solve all of the issues in my Ajax Login application, feel free to fork it on GitHub. If you're interested in talking more about webapp security, please leave a comment, meet me at Jazoon later this week, or let's talk in July at Über Conf.

From http://raibledesigns.com/rd/entry/java_web_application_security_part4
June 22, 2011
by Matt Raible
· 27,377 Views · 2 Likes
Java EE6 Events, a lightweight alternative to JMS
A few weeks ago I attended a BeJUG meeting about Java EE 6: building next generation enterprise applications. Having read much about it, I did not expect to see many shocking hidden features. But there was one part of the demo I found really impressive, due to its loose coupling, enterprise possibilities and simplicity. The feature I'm going to talk about today is the event mechanism in Java EE 6. The general idea is to fire an event and let an event listener pick it up.

I have created an example that is totally useless, but its simplicity helps me focus on the important stuff. I'm going to fire a LogEvent from my backing action that will log to the java.util.logging Logger. The first thing I need is a POJO that contains my log message and my log Level:

    public class LogMessage implements Serializable {

        private final String message;
        private final Level level;

        LogMessage(String message, Level level) {
            this.message = message;
            this.level = level;
        }

        public String getMessage() {
            return message;
        }

        public Level getLevel() {
            return level;
        }
    }

Easy peasy. Now that I have my data wrapper, I need something to fire the event and something to pick it up. The first thing I create is the method where I fire the event. Thanks to CDI, I can inject an event:

    @Inject
    Event<LogMessage> event;

So we just need to fire it:

    event.fire(new LogMessage("Log it baby!", Level.INFO));

Now the event is fired. If no one is registered to pick it up, it disappears into oblivion, so we create a listener. The listener needs a method with one parameter of the generic type given to the event, LogMessage:

    public class LogListener {

        private static final Logger LOGGER = Logger.getAnonymousLogger();

        public void process(@Observes LogMessage message) {
            LOGGER.log(message.getLevel(), message.getMessage());
        }
    }

The @Observes annotation listens to all events with a LogMessage payload. When the event is fired, this method is triggered. This is a very nice way to create a loosely coupled application: you can separate heavy operations or encapsulate less essential operations in these event listeners.

All of this happens synchronously. If we were to replace the log statement with a slow database call to a logging table, we would make our operation heavier than it should be. What I'm looking for is a way to make the call asynchronous. As long as we have EJB support, we can transform our listener into an EJB by adding the @Stateless annotation on top of it. Now it's a stateless enterprise bean. This changes nothing about our sync/async problem, but EJB 3.1 supports async operations, so if we also add the @Asynchronous annotation on top of it, our logging statement will execute asynchronously:

    @Stateless
    @Asynchronous
    public class LogListener {

        private static final Logger LOGGER = Logger.getAnonymousLogger();

        public void process(@Observes LogMessage message) {
            LOGGER.log(message.getLevel(), message.getMessage());
        }
    }

If we wanted to combine database logging and console logging, we could just create multiple methods that listen to the same event. This is a great way to create a lightweight application with very flexible components. The alternative solution to this problem is JMS, but you don't want a heavyweight configuration for this kind of loose coupling. Java EE has worked hard to get rid of the stigma of being heavyweight; I think they are getting there.

From http://styledideas.be/blog/2011/05/22/java-ee6-events-a-lightweight-alternative-to-jms/
June 22, 2011
by Jelle Victoor
· 20,401 Views · 2 Likes
Git Tutorial: Comparing Files With diff
The most common scenario for using diff is to see what changes you have made since your last commit. Let's see how to do it.
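Only the teaser survives in this copy; as a quick sketch of the commands this scenario calls for (my own summary, not from the article):

    # changes in your working tree that are not yet staged
    git diff

    # changes you have staged for the next commit
    git diff --staged

    # everything that changed since the last commit, staged or not
    git diff HEAD

    # restrict the comparison to a single file
    git diff HEAD -- path/to/file.txt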
June 19, 2011
by Veera Sundar
· 271,625 Views · 2 Likes
Developing Android Apps with NetBeans, Maven, and VirtualBox
I am an experienced Java developer who has used various IDEs and prefers NetBeans IDE over all others by a long shot. I am also very fond of Maven as the tool to simplify and automate nearly every aspect of the development of my Java projects throughout their lifecycle. Recently, I started developing Android applications, and naturally I looked for a Maven plugin that would manage my Android projects. Luckily I found the maven-android-plugin, which worked like a charm and allowed me to use Maven for developing my Android projects. The Android Emulator from the Android SDK seemed unusably slow. Luckily, I found a way to use an Android virtual machine in VirtualBox that worked nearly as fast as my native computer! This page documents my experiences.

Tested Environment

Dev machine: Ubuntu 11.04 Linux
IDE: NetBeans
VirtualBox: 4.0.8 r71778
Android SDK: Revision 11, Add-on XML Schema #1, Repository XML Schema #3 (from About in SDK and AVD Manager)
Android version: 2.2

Overview of Steps

Download and install the Android SDK on your dev machine
Attach an Android device to the dev machine
Configure and load your device for development and other use
Create an initial Android Maven project
Connect the Android device to the Android SDK
Debug the Android app using the NetBeans graphical debugger

Download and Install Android SDK

Download and install the Android SDK on your dev machine as described here. Make sure to set the following in the dev machine's ~/.bashrc file:

    export ANDROID_HOME=$HOME/android-sdk-linux_x86 # Change as needed
    export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$PATH"

Attaching an Android Device to the Dev Machine

If you have an actual device, that is usually best. If not, you must use a virtual Android device, which usually has various limitations (e.g. no GPS, camera, etc.). The Android SDK makes it easy to create a new virtual device, but the resulting device is painfully slow in my experience and not usable. Do not bother with this. Instead, create a virtual Android device using VirtualBox as described in the following steps.

Install VirtualBox and an initial Android VM as described here:
http://androidspin.com/2011/01/24/howto-install-android-x86-2-2-in-virtualbox/
http://geeknizer.com/how-to-run-google-android-in-virtualbox-vmware-on-netbooks/

Configure the Android VM so it is connected bidirectionally with your dev machine over TCP, as described here:
http://stackoverflow.com/questions/61156/virtualbox-host-guest-network-setup
I used the approach of configuring a host-only network adapter and a second NAT adapter on the Android VM within VirtualBox.

Configuring Your Android Device

This section describes various things I did to set up a dev environment for my Android device:

Root the device. I used Universal AndRoot.
Install ConnectBot so you have ssh and related network utilities.

Creating the Initial Android Maven Application

Create the initial project using the instructions here. I found it best to create a stub project structure using the maven-archetype-plugin and the archetypes at https://github.com/akquinet/android-archetypes/wiki

Connecting the Android VM Device to the Android SDK

In order for your code to be deployed from NetBeans IDE to the Android device, and in order for you to monitor your deployed app from the Dalvik Debug Monitor (ddms), you need to connect your Android VM device to the Android SDK over TCP as described in the following steps.
On the Android device, open the Terminal Emulator. Type su to become root (your device must be rooted for this). Then type the following commands in the root shell:

    setprop service.adb.tcp.port 5555
    stop adbd
    start adbd

Type the following commands in a dev machine shell. Note that the IP address below is whatever IP address is associated with the device (see ifconfig on Linux for device vboxnet0):

    adb tcpip 5555
    adb connect 192.168.0.101:5555

For details on the above steps see: http://stackoverflow.com/questions/2604727/how-can-i-connect-to-android-with-adb-over-tcp

Set up port forwarding as described here: http://redkrieg.com/2010/10/11/adb-over-ssh-fun-with-port-forwards/ (this is where I am most fuzzy).

Build your Maven Android project using Right-click > Clean and Build. Now for the acid test: whether you can deploy your app to the device from NetBeans IDE! Right-click > Custom > Goal to show the Run Maven dialog. Enter android:deploy in the Goals field. Select the Remember As checkbox and enter android:deploy in its text field. If all is well, the app will deploy to the device and show up in its "Applications" screen.

Debugging the Android App Using the NetBeans Graphical Debugger

Once you can build and deploy your app to the real or virtual Android device, here are the steps to debug the app using the NetBeans debugger:

On the device: start the app. (TODO: determine how to start the app on the device with JVM options so it can wait for a debugger connection. This should be easy.)

On the dev machine, run the Dalvik Debug Monitor (ddms) in the background:

    $ANDROID_HOME/tools/ddms &

Look up your app in ddms and get its debug port. This is described here, but not for NetBeans specifically.

In NetBeans, do Debug > Attach Debugger and specify the port looked up in ddms in the previous step. You may leave the rest of the fields at their defaults. Click OK.
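For reference, here is a minimal sketch of how the maven-android-plugin was typically wired into a pom.xml in this plugin generation. The coordinates and version are my assumptions from memory, so check the plugin's documentation for your setup:

    <plugin>
        <groupId>com.jayway.maven.plugins.android.generation2</groupId>
        <artifactId>maven-android-plugin</artifactId>
        <version>2.8.4</version> <!-- assumed version, verify before use -->
        <extensions>true</extensions>
        <configuration>
            <sdk>
                <platform>8</platform> <!-- API level 8 = Android 2.2 -->
            </sdk>
        </configuration>
    </plugin>

With this in place, mvn clean install builds the APK and mvn android:deploy pushes it to the connected device, which is the goal used in the NetBeans dialog above.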
June 18, 2011
by Farrukh Najmi
· 172,948 Views
Git Tip: Restore a Deleted Tag
A little tip that can be very useful: how to restore a deleted Git tag. If you just deleted a tag by mistake, you can easily restore it by following these steps. First, use

    git fsck --unreachable | grep tag

and you will see the unreachable tag. If there are several tags in the list, use

    git show KEY

to find the right one. Finally, when you know which tag to restore, use

    git update-ref refs/tags/NAME KEY

and the previously deleted tag will be restored with the name NAME. Thanks to Shawn Pearce for the tip.

From http://www.baptiste-wicht.com/2011/06/git-tip-restore-a-deleted-tag/
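A worked example with a hypothetical tag name and made-up abbreviated hashes; note that the tag object must still be in the object database, so this applies to annotated tags that have not yet been pruned:

    $ git tag -d v1.2                     # deleted by mistake
    Deleted tag 'v1.2' (was d3adb33)
    $ git fsck --unreachable | grep tag
    unreachable tag 3f7a9c1
    $ git show 3f7a9c1                    # confirm it is the right tag
    $ git update-ref refs/tags/v1.2 3f7a9c1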
June 16, 2011
by Baptiste Wicht
· 26,571 Views
Method Size Limit in Java
Most people don't know this, but you cannot have methods of unlimited size in Java: Java has a 64k limit on the size of a method's bytecode.

What happens if I run into this limit?

If you run into this limit at compile time, the Java compiler will complain with a message that says something like "code too large to compile". You can also run into this limit at runtime if you had an already large method, just below the 64k limit, and some tool or library does bytecode instrumentation on that method, adding to its size and thus making it go beyond the 64k limit. In this case you will get a java.lang.VerifyError at runtime. This is an issue we ran into with the Chronon recorder, where most large programs would have at least a few large methods, and adding instrumentation to them would cause them to blow past the 64k limit, causing a runtime error in the program. Before we look into how we went about solving this problem for Chronon, let's look at the circumstances under which people write such large methods in the first place.

Where do these large methods come from?

Code generators: As it turns out, most humans don't in fact write such gigantic methods. We found that most of these large methods were the result of code generators; e.g., the ANTLR parser generator generates some very large methods.

Initialization methods: Initialization methods, especially GUI initialization methods, where all the layout, attaching of listeners, etc. for every component is done in one large chunk of code, are a common practice and result in a single large method.

Array initializers: If you have a large array initialized in your code, e.g.:

    static final byte largeArray[] = {10, 20, 30, 40, 50, 60, 70, 80, …};

it is translated by the compiler into a method that uses load/store instructions to initialize the array. Thus an array that is too large can cause this error too, which may seem very mysterious to those who don't know about this limit.

Long JSP pages: Since most JSP compilers put all the JSP code in one method, large JSP pages can make you run into these errors too.

Of course, these are only a few common cases; there can be a lot of other reasons why your method size is too large.

How do we get around this issue?

If you get this error at compile time, it is usually trivial to split your code into multiple methods. It may be a bit hairy when the method limit is reached due to automated code generation like ANTLR or JSPs, but usually even these tools have provisions that allow you to split the code into chunks, e.g. jsp:include in the case of JSPs.

Where things get hairy is the second case I talked about earlier, when bytecode instrumentation causes the size of a method to go beyond the 64k limit, resulting in a runtime error. Of course, you can still look at the method causing the issue and go back and split it. However, this may not be possible if the method is inside a third-party library. Thus, for the Chronon recorder at least, the way we fixed it was to instrument the method and then check the method's size after instrumentation. If the size is above the 64k limit, we go back and 'deinstrument' the method, essentially excluding it from recording. Since both our recorder and Time Travelling Debugger were built from the ground up to deal with excluded code, it wasn't an issue while recording or debugging the rest of the code.

That said, the method size limit of 64k is too small and not needed in a world of 64-bit machines. I would urge everyone reading this to go vote on this JVM bug so that this issue can be resolved in some future version of the JVM.

From http://eblog.chrononsystems.com/method-size-limit-in-java
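To see the compile-time limit for yourself, here is a small sketch (my own, not from the article) that generates a class whose single method is too big; compiling the generated file with javac fails with "code too large":

    import java.io.FileWriter;
    import java.io.IOException;

    public class GenerateHuge {
        public static void main(String[] args) throws IOException {
            StringBuilder src = new StringBuilder();
            src.append("public class Huge {\n");
            src.append("    static int f() {\n");
            src.append("        int x = 0;\n");
            // each statement compiles to a few bytes of bytecode;
            // 30,000 of them push the method well past the 64k limit
            for (int i = 0; i < 30000; i++) {
                src.append("        x += ").append(i % 100).append(";\n");
            }
            src.append("        return x;\n    }\n}\n");
            FileWriter out = new FileWriter("Huge.java");
            out.write(src.toString());
            out.close();
        }
    }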
June 14, 2011
by Prashant Deva
· 21,731 Views
Practical PHP Refactoring: Extract Method
I'm starting a new series: Practical PHP Refactoring. Each article will cover one of the refactorings defined by Fowler in his classic book, applied to PHP code.

Extract Method means creating a new method to contain part of the existing code: it's one of the most basic refactorings that you should be able to perform, just like every chef is able to chop vegetables or turn on the gas. It's a building block for more complex refactorings. Many, many issues derive from methods that are (or have become) too long, or that confuse different concepts in the same block of code.

Why should I extract a method?

Extract Method is one of the simplest tools to help encapsulation. It brings a simplification of scope, since variables defined inside the method won't be able to pollute the calling code.

The refactoring is called Extract Method since it's about object-oriented programming, but Extract Function would have the same meaning.

Extract Method forces you to define a contract with a piece of code, comprising the inputs (method parameters) and the outputs (a return value). It's not real Design by Contract, but it fits 80% of the cases.

Finally, an extracted method may be reused independently from the calling code. Eliminating duplication is one of the driving forces that makes most refactoring techniques interesting.

Steps

I follow Fowler's recipes, but I customize them to my style of development (since I write the code samples) and to PHP peculiarities.

1. Create the method, and choose a meaningful name. Thirty seconds spent here can avoid renaming a doSomething() method in thousands of different calls in the future. A good option may be trying to call it from the point where you want to extract it, to establish a signature handy for the client (like in TDD). But afterwards, comment the call out again, since these steps must not break anything: we should go from a green state to another green state.

2. Copy the extracted code into the new method. Scan it and fix variable references: variables existing prior to the call become the method parameters; variables that are created in the block of code and used afterwards are part of the return value. Typically this is only one variable, but if there is more than one you can wrap them in an object or (temporarily) in an associative array. Local variables may now be hidden inside the method: they are not referenced outside the code, and PHP will garbage-collect them when the method finishes (if they are not referenced anymore, of course).

3. Replace the original code with a call to the method.

4. Perform renamings or refactorings inside the extracted method.

Example

The code sample shows the various steps applied to running PHP code. I also provide a test, since I use it to check that the refactoring has gone well. We start from a method that mixes regular expressions with date formatting:

    // NOTE: the opening of the test class was lost in this copy;
    // the class header and fixture (log line) below are reconstructed.
    class ExtractMethodTest extends PHPUnit_Framework_TestCase
    {
        public function testParsesTheDayOfTheWeek()
        {
            $logLine = '06/11/2011 GET /index.php';
            $parser = new LogParser();
            $day = $parser->getDayOfTheWeek($logLine);
            $this->assertEquals('On Saturday we got a visit', $day);
        }
    }

    class LogParser
    {
        public function getDayOfTheWeek($logLine)
        {
            preg_match('([0-9]{2}/[0-9]{2}/[0-9]{4})', $logLine, $matches);
            $extractedDate = $matches[0];
            $date = new DateTime($extractedDate);
            return 'On ' . $date->format('l') . ' we got a visit';
        }
    }
We extract a method, and in the TDD style we do just as much as it takes to keep the test green. We call the new method from the old code immediately: this is the biggest step. The test does not change; LogParser becomes:

    class LogParser
    {
        public function getDayOfTheWeek($logLine)
        {
            $date = $this->getDate($logLine);
            return 'On ' . $date->format('l') . ' we got a visit';
        }

        function getDate($logLine)
        {
            preg_match('([0-9]{2}/[0-9]{2}/[0-9]{4})', $logLine, $matches);
            $extractedDate = $matches[0];
            $date = new DateTime($extractedDate);
            return $date;
        }
    }

Finally, we refine the extracted method, deciding its scope, adding a docblock, and eliminating temporary, explanatory variables now rendered useless by the simplicity of this method:

    class LogParser
    {
        public function getDayOfTheWeek($logLine)
        {
            $date = $this->getDate($logLine);
            return 'On ' . $date->format('l') . ' we got a visit';
        }

        /**
         * @return DateTime
         */
        private function getDate($logLine)
        {
            preg_match('([0-9]{2}/[0-9]{2}/[0-9]{4})', $logLine, $matches);
            return new DateTime($matches[0]);
        }
    }
June 13, 2011
by Giorgio Sironi
· 2,039 Views
Android Tutorial: How to Parse/Read JSON Data Into an Android ListView
Today we get on with our series that connects our Android applications to internet web services! Next up in line: from JSON to a ListView. A lot of this project is identical to the previous post in this series, so try looking there first if you have any problems. At the bottom of the post I'll add the Eclipse project with the source.

For this example I made use of an already existing JSON web service located here. This is a piece of the JSON array that gets returned:

    {"earthquakes": [
        {
            "eqid": "c0001xgp",
            "magnitude": 8.8,
            "lng": 142.369,
            "src": "us",
            "datetime": "2011-03-11 04:46:23",
            "depth": 24.4,
            "lat": 38.322
        },
        {
            "eqid": "2007hear",
            "magnitude": 8.4,
            "lng": 101.3815,
            "src": "us",
            "datetime": "2007-09-12 09:10:26",
            "depth": 30,
            "lat": -4.5172
        }
    ]}

So how do we get this data into our application? Behold our getJSON class!

    public static JSONObject getJSONfromURL(String url) {
        // initialize
        InputStream is = null;
        String result = "";
        JSONObject jArray = null;

        // http post
        try {
            HttpClient httpclient = new DefaultHttpClient();
            HttpPost httppost = new HttpPost(url);
            HttpResponse response = httpclient.execute(httppost);
            HttpEntity entity = response.getEntity();
            is = entity.getContent();
        } catch (Exception e) {
            Log.e("log_tag", "Error in http connection " + e.toString());
        }

        // convert response to string
        try {
            BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
            StringBuilder sb = new StringBuilder();
            String line = null;
            while ((line = reader.readLine()) != null) {
                sb.append(line + "\n");
            }
            is.close();
            result = sb.toString();
        } catch (Exception e) {
            Log.e("log_tag", "Error converting result " + e.toString());
        }

        // try to parse the string to a JSON object
        try {
            jArray = new JSONObject(result);
        } catch (JSONException e) {
            Log.e("log_tag", "Error parsing data " + e.toString());
        }

        return jArray;
    }

The code above can be divided into three parts:

the first part makes the HTTP call;
the second part converts the stream into a String;
the third part converts the string to a JSONObject.

Now we only have to implement this in our ListView. We can use the same method as in the XML tutorial: we make a HashMap that stores our data, put the JSON values in the HashMap, and after that bind the HashMap to a SimpleAdapter.
Here is how it's done:

Implementation

    ArrayList<HashMap<String, String>> mylist = new ArrayList<HashMap<String, String>>();

    // Get the data (see above)
    JSONObject json = JSONfunctions.getJSONfromURL("http://api.geonames.org/postalCodeSearchJSON?formatted=true&postalcode=9791&maxRows=10&username=demo&style=full");

    try {
        // Get the element that holds the earthquakes ( JSONArray )
        JSONArray earthquakes = json.getJSONArray("earthquakes");

        // Loop the Array
        for (int i = 0; i < earthquakes.length(); i++) {
            HashMap<String, String> map = new HashMap<String, String>();
            JSONObject e = earthquakes.getJSONObject(i);

            map.put("id", String.valueOf(i));
            map.put("name", "Earthquake name:" + e.getString("eqid"));
            map.put("magnitude", "Magnitude: " + e.getString("magnitude"));
            mylist.add(map);
        }
    } catch (JSONException e) {
        Log.e("log_tag", "Error parsing data " + e.toString());
    }

After this we only need to set up the SimpleAdapter:

    ListAdapter adapter = new SimpleAdapter(this, mylist, R.layout.main,
            new String[] { "name", "magnitude" },
            new int[] { R.id.item_title, R.id.item_subtitle });
    setListAdapter(adapter);

    final ListView lv = getListView();
    lv.setTextFilterEnabled(true);
    lv.setOnItemClickListener(new OnItemClickListener() {
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            // NOTE: the line retrieving the clicked row's map was lost in this
            // copy; this is the usual way to get it back from the ListView.
            @SuppressWarnings("unchecked")
            HashMap<String, String> o = (HashMap<String, String>) lv.getItemAtPosition(position);
            Toast.makeText(Main.this, "ID '" + o.get("id") + "' was clicked.",
                    Toast.LENGTH_SHORT).show();
        }
    });

Now we have a ListView filled with JSON data! Here is the Eclipse project: source code. Have fun playing around with it.
June 8, 2011
by Mark Mooibroek
· 260,031 Views
When to Use Apache Camel?
When to use Apache Camel, a popular integration framework for the JVM, and when to use other alternatives.
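Only the teaser survives in this copy; for flavour, here is a minimal runnable sketch of a Camel route (my own illustration using only camel-core; the folder names are arbitrary):

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class FileMoveRoute {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                public void configure() {
                    // watch an inbox folder, log each file, copy it to an outbox
                    from("file:data/inbox?noop=true")
                        .to("log:demo")
                        .to("file:data/outbox");
                }
            });
            context.start();
            Thread.sleep(10000); // let the route poll for a while
            context.stop();
        }
    }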
June 5, 2011
by Kai Wähner
· 152,596 Views · 12 Likes
CDI AOP Tutorial: Java Standard Method Interception Tutorial - Java EE
This article discusses CDI-based AOP in a tutorial format. CDI is the Java standard for dependency injection (DI) and interception (AOP). It is evident from the popularity of DI and AOP that Java needs to address DI and AOP so that it can build other standards on top of it. DI and AOP are already the foundation of many Java frameworks. CDI is a foundational aspect of Java EE 6. It is, or shortly will be, supported by Caucho's Resin Java application server (Java EE Web Profile certified), IBM's WebSphere, Oracle's GlassFish, Red Hat's JBoss and many more application servers. CDI is similar to the core Spring and Guice frameworks. Like JPA did for ORM, CDI simplifies and sanitizes the API for DI and AOP. If you have worked with Spring or Guice, you will find CDI easy to use and easy to learn. If you are new to AOP, then CDI is an easy on-ramp for picking up AOP quickly, as it uses a small subset of what AOP provides. CDI-based AOP is simpler to use and learn. One can argue that CDI only implements a small part of AOP, namely method interception. While this is a small part of what AOP has to offer, it is also the part that most developers use. CDI can be used standalone and can be embedded into any application.

Here is the source code for this tutorial, and instructions for use. It is no accident that this tutorial follows many of the same examples as the Spring AOP tutorial written three years ago. It will be interesting to compare and contrast the examples in this tutorial with the ones written three years ago for Spring-based AOP.

Design Goals of This Tutorial

This tutorial is meant to be a description and explanation of AOP in CDI without the clutter of EJB 3.1 or JSF. There are already plenty of tutorials that cover EJB 3.1 and JSF (and CDI). We believe that CDI has merit on its own outside of the EJB and JSF space. This tutorial only covers CDI; to repeat, there is no JSF 2 or EJB 3.1 in this tutorial. There are plenty of articles and tutorials that cover using CDI as part of a larger Java EE 6 application. This tutorial is not that. This tutorial series is CDI and only CDI. This tutorial has only full, complete code examples with source code you can download and try out on your own. There are no code snippets where you can't figure out where in the code you are supposed to be. So far these tutorials have been well received, and we got a lot of feedback. There appears to be a lot of interest in the CDI standard. Thanks for reading, and thanks for your comments and participation so far.

AOP Basics

For some, AOP seems like voodoo magic. For others, AOP seems like a cure-all. For now, let's just say that AOP is a tool that you want in your developer toolbox. It can make seemingly impossible things easy. Again, when we talk about AOP in CDI, we are really talking about interception, which is a small but very useful part of AOP. For brevity, I am going to refer to interception as AOP.

The first time I used AOP was with Spring's transaction management support. I did not realize I was using AOP. I just knew Spring could apply EJB-style declarative transaction management to POJOs. It was probably three to six months before I realized that what I was using was Spring's AOP support. The Spring framework truly brought AOP out of the esoteric closet into the mainstream light of day. CDI brings these concepts into the JSR standards, where other Java standards can build on top of CDI. You can think of AOP as a way to apply services (called cross-cutting concerns) to objects.
AOP encompasses more than this, but this is where it mostly gets used in the mainstream. I've used AOP to apply caching services, transaction management, resource management, etc. to any number of objects in an application. I am currently working with a team of folks on the CDI implementation for the revived JSR-107 JCache. AOP is not a panacea, but it certainly fits a lot of otherwise difficult use cases. You can think of AOP as a dynamic decorator design pattern. The decorator pattern allows additional behavior to be added to an existing class by wrapping the original class, duplicating its interface and then delegating to the original. See this article on the decorator pattern for more detail. (Notice that in addition to supporting AOP-style interception, CDI also supports actual decorators, which are not covered in this article.)

Sample Application Revisited

For this introduction to AOP, let's take a simple example: let's apply security services to our automated teller machine example from the first article in this series. Let's say that when a user logs into a system, a SecurityToken is created that carries the user's credentials, and before methods on objects get invoked, we want to check that the user has credentials to invoke those methods. For review, let's look at the AutomatedTellerMachine interface.

Code listing: AutomatedTellerMachine interface

    package org.cdi.advocacy;

    import java.math.BigDecimal;

    public interface AutomatedTellerMachine {
        public abstract void deposit(BigDecimal bd);
        public abstract void withdraw(BigDecimal bd);
    }

In a web application, you could write a ServletFilter that stored this SecurityToken in the HttpSession, and then on every request retrieved the token from the session and put it into a ThreadLocal variable, where it could be accessed from a SecurityService that you could implement. Perhaps the objects that needed the SecurityService could access it as follows:

Code listing: AutomatedTellerMachineImpl implementing security without AOP

    public void deposit(BigDecimal bd) {
        /* If the user is not logged in, don't let them use this method */
        if (!securityManager.isLoggedIn()) {
            throw new SecurityViolationException();
        }

        /* Only proceed if the current user is allowed. */
        if (!securityManager.isAllowed("AutomatedTellerMachine", operationName)) {
            throw new SecurityViolationException();
        }
        ...
        transport.communicateWithBank(...);
    }

In our ATM example, the above might work out well, but imagine a system with thousands of classes that needed security. Now imagine that the way we check whether a user is "logged in" changed. If we put this code into every method that needed security, we could have to change it a thousand times. What we want to do instead is use CDI to create a decorated version of the AutomatedTellerMachineImpl bean. The decorated version adds the additional behavior to the AutomatedTellerMachineImpl object without changing the actual implementation. In AOP speak, this concept is called a cross-cutting concern: a concern that crosses the boundary of many objects.

CDI does this by creating what is called an AOP proxy. An AOP proxy is like a dynamic decorator. Underneath the covers, CDI can generate a class at runtime (the AOP proxy) that has the same interface as our AutomatedTellerMachine.
The AOP proxy wraps our existing atm object and provides additional behavior by delegating to a list of method interceptors. The method interceptors provide the additional behavior and are similar to ServletFilters, but for methods instead of requests.

Diagrams of CDI AOP Support

Before we added CDI AOP, our atm example looked like Figure 1 (before AOP advice). After we added AOP support, we get an AOP proxy that applies the securityAdvice to the atm, as shown in Figure 2 (after AOP advice). You can see that the AOP proxy implements the AutomatedTellerMachine interface. When the client object looks up the atm and starts invoking methods, instead of executing the methods directly, it executes the method on the proxy, which delegates the call to a series of method interceptors called advice, which eventually invoke the actual atm instance (now called atmTarget).

Let's actually look at the code for this example. We will use a simplified SecurityToken that gets stored in a ThreadLocal variable, but one could imagine one that was populated with data from a database, an LDAP server, or some other source of authentication and authorization. Here is the SecurityToken for this example:

SecurityToken.java (gets stored in a ThreadLocal)

    package org.cdi.advocacy.security;

    /**
     * @author Richard Hightower
     */
    public class SecurityToken {

        private boolean allowed;
        private String userName;

        public SecurityToken() {
        }

        public SecurityToken(boolean allowed, String userName) {
            super();
            this.allowed = allowed;
            this.userName = userName;
        }

        public boolean isAllowed(String object, String methodName) {
            return allowed;
        }

        /** @return Returns the allowed. */
        public boolean isAllowed() {
            return allowed;
        }

        /** @param allowed The allowed to set. */
        public void setAllowed(boolean allowed) {
            this.allowed = allowed;
        }

        /** @return Returns the userName. */
        public String getUserName() {
            return userName;
        }

        /** @param userName The userName to set. */
        public void setUserName(String userName) {
            this.userName = userName;
        }
    }

The SecurityService stores the SecurityToken in the ThreadLocal variable, then delegates to it to see if the current user has access to perform the current operation on the current object, as follows:

SecurityService.java

    package org.cdi.advocacy.security;

    public class SecurityService {

        private static ThreadLocal<SecurityToken> currentToken = new ThreadLocal<SecurityToken>();

        public static void placeSecurityToken(SecurityToken token) {
            currentToken.set(token);
        }

        public static void clearSecuirtyToken() {
            currentToken.set(null);
        }

        public boolean isLoggedIn() {
            SecurityToken token = currentToken.get();
            return token != null;
        }

        public boolean isAllowed(String object, String method) {
            SecurityToken token = currentToken.get();
            return token.isAllowed();
        }

        public String getCurrentUserName() {
            SecurityToken token = currentToken.get();
            if (token != null) {
                return token.getUserName();
            } else {
                return "Unknown";
            }
        }
    }

The SecurityService will throw a SecurityViolationException if a user is not allowed to access a resource. SecurityViolationException is just a simple exception for this example.
SecurityViolationException.java

package org.cdi.advocacy.security;

/**
 * @author Richard Hightower
 */
public class SecurityViolationException extends RuntimeException {

    private static final long serialVersionUID = 1L;
}

To move the security code out of the AutomatedTellerMachineImpl class, and any other class that needs security, we will write an aspect in CDI to intercept calls and perform security checks before the method call. To do this we will create a method interceptor (known in AOP speak as an advice) and intercept method calls on the atm object. Here is the SecurityAdvice class, which will intercept calls on the AutomatedTellerMachineImpl class.

SecurityAdvice

package org.cdi.advocacy.security;

import javax.inject.Inject;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

/**
 * @author Richard Hightower
 */
@Secure
@Interceptor
public class SecurityAdvice {

    @Inject
    private SecurityService securityManager;

    @AroundInvoke
    public Object checkSecurity(InvocationContext joinPoint) throws Exception {
        System.out.println("In SecurityAdvice");

        /* If the user is not logged in, don't let them use this method. */
        if (!securityManager.isLoggedIn()) {
            throw new SecurityViolationException();
        }

        /* Get the name of the method being invoked. */
        String operationName = joinPoint.getMethod().getName();
        /* Get the name of the object being invoked. */
        String objectName = joinPoint.getTarget().getClass().getName();

        /*
         * Invoke the method or the next interceptor in the list,
         * if the current user is allowed.
         */
        if (!securityManager.isAllowed(objectName, operationName)) {
            throw new SecurityViolationException();
        }

        return joinPoint.proceed();
    }
}

Notice that we annotate the SecurityAdvice class with a @Secure annotation. The @Secure annotation is an @InterceptorBinding. We use it to denote both the interceptor and the classes it intercepts. More on this later.

Notice that we use @Inject to inject the securityManager. Also, we mark the method that implements the around advice with an @AroundInvoke annotation. This essentially says that this is the method that does the dynamic decoration. Thus, the checkSecurity method of SecurityAdvice is the method that implements the advice. You can think of advice as the decoration that we want to apply to other objects. The objects getting the decoration are called advised objects.

Notice that the SecurityService gets injected into the SecurityAdvice, and the checkSecurity method uses the SecurityService to see if the user is logged in and has the rights to execute the method.

An instance of InvocationContext, namely joinPoint, is passed as an argument to checkSecurity. The InvocationContext has information about the method that is being called and provides control that determines whether the advised object's method gets invoked (e.g., AutomatedTellerMachineImpl.withdraw and AutomatedTellerMachineImpl.deposit). If joinPoint.proceed() is not called, then the wrapped method of the advised object (withdraw or deposit) is not called. (The proceed method causes the actual decorated method to be invoked, or the next interceptor in the chain to get invoked.)

In Spring, to apply an advice like SecurityAdvice to an advised object, you need a pointcut. A pointcut is like a filter that picks the objects and methods that get decorated. In CDI, you just mark the class or methods of the class that you want decorated with an interceptor binding annotation.
There is no complex pointcut language. You could implement one as a CDI extension, but it does not come with CDI by default. CDI uses the most common way developers apply interceptors, i.e., with annotations. CDI scans each class in each jar (and other classpath locations) that has a META-INF/beans.xml. The SecurityAdvice gets installed via the CDI beans.xml:

META-INF/beans.xml

<beans>
   <interceptors>
      <class>org.cdi.advocacy.security.SecurityAdvice</class>
   </interceptors>
</beans>

You can install interceptors in the order you want them called. In order to associate an interceptor with the classes and methods it decorates, you have to define an InterceptorBinding annotation. An example of such a binding is the @Secure annotation defined below.

Secure.java annotation

package org.cdi.advocacy.security;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import static java.lang.annotation.ElementType.*;
import static java.lang.annotation.RetentionPolicy.*;

import javax.interceptor.InterceptorBinding;

@InterceptorBinding
@Retention(RUNTIME)
@Target({TYPE, METHOD})
public @interface Secure {
}

Notice that we annotated the @Secure annotation with the @InterceptorBinding annotation. InterceptorBindings follow a lot of the same rules as the qualifiers discussed in the first two articles in this series. InterceptorBindings are like qualifiers for injection in that they can have members which further qualify the interception. You can also disable InterceptorBinding annotation members from qualifying an interception by using @NonBinding, just like you can in qualifiers.

To finish our example, we need to annotate our AutomatedTellerMachine with the same @Secure annotation, thus associating the AutomatedTellerMachine with our SecurityAdvice.

AutomatedTellerMachine class using @Secure

package org.cdi.advocacy;

...
import javax.inject.Inject;
import org.cdi.advocacy.security.Secure;

@Secure
public class AutomatedTellerMachineImpl implements AutomatedTellerMachine {

    @Inject @Json
    private ATMTransport transport;

    public void deposit(BigDecimal bd) {
        System.out.println("deposit called");
        transport.communicateWithBank(null);
    }

    public void withdraw(BigDecimal bd) {
        System.out.println("withdraw called");
        transport.communicateWithBank(null);
    }
}

You have the option of using @Secure on individual methods or at the class level. In this example, we annotated the class itself, which applies the interceptor to every method.

Let's complete the example by reviewing the AtmMain main method, which looks up the atm from CDI's bean container:

AtmMain.java

package org.cdi.advocacy;

import java.math.BigDecimal;

import org.cdi.advocacy.security.SecurityToken;
import org.cdiadvocate.beancontainer.BeanContainer;
import org.cdiadvocate.beancontainer.BeanContainerManager;
import org.cdi.advocacy.security.SecurityService;

public class AtmMain {

    public static void simulateLogin() {
        SecurityService.placeSecurityToken(new SecurityToken(true, "Rick Hightower"));
    }

    public static void simulateNoAccess() {
        SecurityService.placeSecurityToken(new SecurityToken(false, "Tricky Lowtower"));
    }

    public static BeanContainer beanContainer = BeanContainerManager.getInstance();
    static {
        beanContainer.start();
    }

    public static void main(String[] args) throws Exception {
        simulateLogin();
        //simulateNoAccess();

        AutomatedTellerMachine atm = beanContainer.getBeanByType(AutomatedTellerMachine.class);
        atm.deposit(new BigDecimal("1.00"));
    }
}
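To make the proxying mechanics less magical, here is a minimal hand-rolled sketch of what an AOP proxy does, written with a plain JDK dynamic proxy. This is illustrative only: the class CDI actually generates is container-specific, and the AopProxySketch class name and the secure() helper are made up for this sketch; the interface, SecurityService and SecurityViolationException come from the listings above.

AopProxySketch.java (illustrative sketch, not CDI's generated proxy)

package org.cdi.advocacy;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

import org.cdi.advocacy.security.SecurityService;
import org.cdi.advocacy.security.SecurityViolationException;

public class AopProxySketch {

    /* Wraps atmTarget in a dynamic proxy that performs the same checks
       as SecurityAdvice before delegating to the real object. */
    public static AutomatedTellerMachine secure(final AutomatedTellerMachine atmTarget,
            final SecurityService securityManager) {
        return (AutomatedTellerMachine) Proxy.newProxyInstance(
                AutomatedTellerMachine.class.getClassLoader(),
                new Class<?>[] { AutomatedTellerMachine.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args)
                            throws Throwable {
                        if (!securityManager.isLoggedIn()) {
                            throw new SecurityViolationException();
                        }
                        if (!securityManager.isAllowed(
                                atmTarget.getClass().getName(), method.getName())) {
                            throw new SecurityViolationException();
                        }
                        /* The equivalent of joinPoint.proceed(): delegate to the target. */
                        return method.invoke(atmTarget, args);
                    }
                });
    }
}

Calling secure(new AutomatedTellerMachineImpl(), new SecurityService()).deposit(...) behaves like Figure 2: the client holds the proxy, and the real atmTarget runs only once the checks pass.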
Before we added AOP support, when we looked up the atm we got the object directly, as shown in Figure 1; now that we have applied AOP, when we look up the object we get what is in Figure 2. When we look up the atm in the application context, we get the AOP proxy, which applies the decoration (advice, method interceptor) to the atm target by wrapping the target and delegating to it after it invokes the series of method interceptors.

Victory lap

The last code listing works just like you think. If you use simulateLogin, atm.deposit does not throw a SecurityViolationException. If you use simulateNoAccess, it does throw a SecurityViolationException.

Now let's weave a few more "aspects" into the mix to drive some points home and to show how interception works with multiple interceptors. I will go quicker this time.

LoggingInterceptor

package org.cdi.advocacy;

import java.util.Arrays;
import java.util.logging.Logger;

import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@Logable
@Interceptor
public class LoggingInterceptor {

    @AroundInvoke
    public Object log(InvocationContext ctx) throws Exception {
        System.out.println("In LoggingInterceptor");
        Logger logger = Logger.getLogger(ctx.getTarget().getClass().getName());
        logger.info("before call to " + ctx.getMethod() + " with args "
                + Arrays.toString(ctx.getParameters()));
        Object returnMe = ctx.proceed();
        logger.info("after call to " + ctx.getMethod() + " returned " + returnMe);
        return returnMe;
    }
}

Now we need to define the Logable interceptor binding annotation as follows:

package org.cdi.advocacy;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import static java.lang.annotation.ElementType.*;
import static java.lang.annotation.RetentionPolicy.*;

import javax.interceptor.InterceptorBinding;

@InterceptorBinding
@Retention(RUNTIME)
@Target({TYPE, METHOD})
public @interface Logable {
}

Now, to use it, we just mark the methods where we want this logging.

AutomatedTellerMachineImpl.java using Logable

package org.cdi.advocacy;

...
@Secure
public class AutomatedTellerMachineImpl implements AutomatedTellerMachine {
    ...
    @Logable
    public void deposit(BigDecimal bd) {
        System.out.println("deposit called");
        transport.communicateWithBank(null);
    }

    public void withdraw(BigDecimal bd) {
        System.out.println("withdraw called");
        transport.communicateWithBank(null);
    }
}

Notice that we use @Secure at the class level, which applies the security interceptor to every method in AutomatedTellerMachineImpl, but we use @Logable only on the deposit method, which applies it, you guessed it, only on the deposit method.
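When more than one interceptor applies, the container chains them: each proceed() call hands off to the next interceptor until the target method runs. A minimal hand-rolled sketch of that chaining idea follows; it is illustrative only, and the Invocation, MethodInterceptor and Chain names are made up, standing in for CDI's real InvocationContext machinery.

Chain.java (illustrative sketch of interceptor chaining)

package org.cdi.advocacy;

import java.util.Iterator;
import java.util.List;

/* A stand-in for InvocationContext: proceed() moves to the next step. */
interface Invocation { Object proceed() throws Exception; }

interface MethodInterceptor { Object invoke(Invocation next) throws Exception; }

class Chain implements Invocation {

    private final Iterator<MethodInterceptor> interceptors;
    private final Invocation target; // ultimately calls the real method

    Chain(List<MethodInterceptor> interceptors, Invocation target) {
        this.interceptors = interceptors.iterator();
        this.target = target;
    }

    public Object proceed() throws Exception {
        /* Each interceptor runs in list order; the last proceed() hits the target. */
        return interceptors.hasNext() ? interceptors.next().invoke(this) : target.proceed();
    }
}

The order in which interceptors are installed is exactly what the beans.xml listing below controls.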
Now you have to add this interceptor to the beans.xml:

META-INF/beans.xml

<beans>
   <interceptors>
      <class>org.cdi.advocacy.LoggingInterceptor</class>
      <class>org.cdi.advocacy.security.SecurityAdvice</class>
   </interceptors>
</beans>

When we run this again, we get something like this in our console output:

May 15, 2011 6:46:22 PM org.cdi.advocacy.LoggingInterceptor log
INFO: before call to public void org.cdi.advocacy.AutomatedTellerMachineImpl.deposit(java.math.BigDecimal) with args [1.00]
May 15, 2011 6:46:22 PM org.cdi.advocacy.LoggingInterceptor log
INFO: after call to public void org.cdi.advocacy.AutomatedTellerMachineImpl.deposit(java.math.BigDecimal) returned null

Notice that the order of interceptors in the beans.xml file determines the order of execution in the code. (I added a println to each interceptor just to show the ordering.) When we run this, we get the following output:

In LoggingInterceptor
In SecurityAdvice

If we switch the order in the beans.xml file, we get a different order in the console output:

META-INF/beans.xml

<beans>
   <interceptors>
      <class>org.cdi.advocacy.security.SecurityAdvice</class>
      <class>org.cdi.advocacy.LoggingInterceptor</class>
   </interceptors>
</beans>

In SecurityAdvice
In LoggingInterceptor

This is important, as many interceptors can be applied; you have one place to set the order.

Conclusion

AOP is neither a cure-all nor voodoo magic, but a powerful tool that needs to be in your bag of tricks. The Spring framework brought AOP to the mainstream masses, and Spring 2.5/3.x has simplified using AOP. CDI brings AOP and DI into the standards bodies, where it can be further mainstreamed, refined and made part of future Java standards like JCache, Java EE 6 and Java EE 7.

You can use CDI to apply services (called cross-cutting concerns) to objects using AOP's interception model. AOP need not seem like a foreign concept, as it is merely a more flexible version of the decorator design pattern. With AOP you can add additional behavior to an existing class without writing a lot of wrapper code. This can be a real time saver when you have a use case where you need to apply a cross-cutting concern to a slew of classes.

To reiterate: CDI is the Java standard for dependency injection and interception (AOP). It is evident from the popularity of DI and AOP that Java needs to address them so that it can build other standards on top of them. DI and AOP are the foundation of many Java frameworks. I hope you share my excitement about CDI as a basis for other JSRs, Java frameworks and standards.

CDI is a foundational aspect of Java EE 6. It is, or will shortly be, supported by Caucho's Resin, IBM's WebSphere, Oracle's GlassFish, Red Hat's JBoss and many more application servers. CDI is similar to the core Spring and Guice frameworks. However, CDI is a general-purpose framework that can be used outside of Java EE 6. CDI simplifies and sanitizes the API for DI and AOP. I find that working with CDI-based AOP is easier, and it covers the most common use cases. CDI is a rethink of how to do dependency injection and AOP (interception, really). It simplifies it. It reduces it. It gets rid of legacy, outdated ideas. CDI is to Spring and Guice what JPA is to Hibernate and TopLink. CDI will co-exist with Spring and Guice. There are plugins to make them interoperate nicely (more on these shortly).

This is just a brief taste. There is more to come.
Resources

  • CDI Source
  • CDI advocacy group
  • CDI advocacy blog
  • CDI advocacy Google Code project
  • Google group for CDI advocacy
  • Manifesto version 1
  • Weld reference documentation
  • CDI JSR 299
  • Resin, a fast and light CDI and Java EE 6 Web Profile implementation
  • CDI & JSF Part 1 Intro by Andy Gibson
  • CDI & JSF Part 2 Intro by Andy Gibson
  • CDI & JSF Part 3 Intro by Andy Gibson

About the Author

This article was written with CDI advocacy in mind by Rick Hightower, with some collaboration from others. Rick Hightower has worked as a CTO, Director of Development and a developer for the last 20 years. He has been involved with J2EE since its inception; he worked at an EJB container company in 1999. He has been working with Java since 1996, and writing code professionally since 1990. Rick was an early Spring enthusiast. He enjoys bouncing back and forth between C, Python, Groovy and Java development. Although not a fan of EJB 3, Rick is a big fan of the potential of CDI and thinks that EJB 3.1 has come a lot closer to the mark. Rick Hightower is CTO of Mammatus and is an expert on Java and cloud computing.

There are 18 code listings in this article.
May 25, 2011
by Rick Hightower
· 83,300 Views · 10 Likes
Log4j Tutorial – Writing different log levels in different log files
Recently one of my blog readers, Surisetty, sent me a question asking whether it is possible to write log messages of different levels (info, debug, etc.) into different log files. To answer his question: yes, it is possible. We can do this by extending the FileAppender class and writing our own logic. Below is the proof-of-concept code written to demonstrate this. Before that, you can download the Eclipse project file to run this code in your environment.

Download the Source code

To write different log levels in different log files, create a custom Log4j appender extending FileAppender. In it, override the append() method and check the log level before writing a log message. Based on the level, call the setFile() method to switch to the corresponding log file. Also, use MDC to store the original log file name mentioned in log4j.properties. This is needed because setFile() changes the log file name every time you call it, so we need to keep track of the original file name somehow, and we can use Log4j's MDC for this.

Custom Appender: LogLevelFilterFileAppender

package com.veerasundar.log4j;

import java.io.File;
import java.io.IOException;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Layout;
import org.apache.log4j.MDC;
import org.apache.log4j.spi.ErrorCode;
import org.apache.log4j.spi.LoggingEvent;

/**
 * This customized Log4j appender will separate the log messages based on their
 * LEVELS and will write them into separate files. For example, all DEBUG
 * messages will go to one file and all INFO messages will go to a different file.
 *
 * @author Veera Sundar | http://veerasundar.com
 */
public class LogLevelFilterFileAppender extends FileAppender {

    private final static String DOT = ".";
    private final static String HYPHEN = "-";
    private static final String ORIG_LOG_FILE_NAME = "OriginalLogFileName";

    public LogLevelFilterFileAppender() {
    }

    public LogLevelFilterFileAppender(Layout layout, String fileName,
            boolean append, boolean bufferedIO, int bufferSize)
            throws IOException {
        super(layout, fileName, append, bufferedIO, bufferSize);
    }

    public LogLevelFilterFileAppender(Layout layout, String fileName,
            boolean append) throws IOException {
        super(layout, fileName, append);
    }

    public LogLevelFilterFileAppender(Layout layout, String fileName)
            throws IOException {
        super(layout, fileName);
    }

    @Override
    public void activateOptions() {
        MDC.put(ORIG_LOG_FILE_NAME, fileName);
        super.activateOptions();
    }

    @Override
    public void append(LoggingEvent event) {
        try {
            setFile(appendLevelToFileName((String) MDC.get(ORIG_LOG_FILE_NAME),
                    event.getLevel().toString()), fileAppend, bufferedIO,
                    bufferSize);
        } catch (IOException ie) {
            errorHandler.error(
                    "Error occured while setting file for the log level "
                            + event.getLevel(), ie, ErrorCode.FILE_OPEN_FAILURE);
        }
        super.append(event);
    }

    private String appendLevelToFileName(String oldLogFileName, String level) {
        if (oldLogFileName != null) {
            final File logFile = new File(oldLogFileName);
            String newFileName = "";
            final String fn = logFile.getName();
            final int dotIndex = fn.indexOf(DOT);
            if (dotIndex != -1) {
                // The file name has an extension, so insert the level
                // between the file name and the extension.
                newFileName = fn.substring(0, dotIndex) + HYPHEN + level + DOT
                        + fn.substring(dotIndex + 1);
            } else {
                // The file name has no extension, so just append the level
                // at the end.
                newFileName = fn + HYPHEN + level;
            }
            return logFile.getParent() + File.separator + newFileName;
        }
        return null;
    }
}

log4j.properties file

log4j.rootLogger = DEBUG, fileout
log4j.appender.fileout = com.veerasundar.log4j.LogLevelFilterFileAppender
log4j.appender.fileout.layout.ConversionPattern = %d{ABSOLUTE} %5p %c - %m%n
log4j.appender.fileout.layout = org.apache.log4j.PatternLayout
log4j.appender.fileout.File = C:/vraa/temp/logs.log

Let's test our code:

package com.veerasundar.log4j;

import org.apache.log4j.Logger;

public class Log4jDemo {

    private static final Logger logger = Logger.getLogger(Log4jDemo.class);

    public static void main(String args[]) {
        logger.debug("This is a debug message");
        logger.info("This is a information message");
        logger.warn("This is a warning message");
        logger.error("This is an error message");
        logger.fatal("This is a fatal message");

        logger.debug("This is another debug message");
        logger.info("This is another information message");
        logger.warn("This is another warning message");
        logger.error("This is another error message");
        logger.fatal("This is another fatal message");
    }
}

Download the Source code

From http://veerasundar.com/blog/2011/05/log4j-tutorial-writing-different-log-levels-in-different-log-files/
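As an aside, if you'd rather not subclass FileAppender, stock Log4j 1.2 can achieve a similar split with one FileAppender per level plus a LevelMatchFilter and a DenyAllFilter. A minimal sketch of that alternative follows; it is wired up programmatically because the properties-format configuration cannot express filters (log4j.xml can), and the PerLevelAppenders class name and file paths are made up for illustration.

package com.veerasundar.log4j;

import java.io.IOException;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.varia.DenyAllFilter;
import org.apache.log4j.varia.LevelMatchFilter;

public class PerLevelAppenders {

    // Builds a FileAppender that accepts exactly one level and denies the rest.
    static FileAppender forLevel(Level level, String file) throws IOException {
        FileAppender appender = new FileAppender(
                new PatternLayout("%d{ABSOLUTE} %5p %c - %m%n"), file);
        LevelMatchFilter match = new LevelMatchFilter();
        match.setLevelToMatch(level.toString());
        match.setAcceptOnMatch(true);
        appender.addFilter(match);
        appender.addFilter(new DenyAllFilter()); // drop everything that didn't match
        return appender;
    }

    public static void main(String[] args) throws IOException {
        Logger root = Logger.getRootLogger();
        root.setLevel(Level.DEBUG);
        root.addAppender(forLevel(Level.DEBUG, "logs-DEBUG.log"));
        root.addAppender(forLevel(Level.INFO, "logs-INFO.log"));
        Logger.getLogger(PerLevelAppenders.class).info("goes only to logs-INFO.log");
    }
}

The custom-appender approach in the article keeps a single appender and configuration entry; the filter approach trades that for stock classes at the cost of one appender per level.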
May 19, 2011
by Veera Sundar
· 51,240 Views
Git backups, and no, it's not just about pushing
Git is a backup system in itself: for example, you can version folders of .txt files containing TODO lists. Since Git versions these files just like it does code, it can bring you back after accidental deletions or modifications. Yet if you do not regularly push your commits, a problem with the drive containing the repository may cause the loss of all your work.

You could put the repository in Dropbox or a similar service, but I don't trust that approach: Dropbox syncs the files in .git independently from the rest and from one another, and it may break the repository, temporarily or for good. Besides, I only want to snapshot a backup at specific points in time, not permanently occupy my connection with instant mirroring.

A note before beginning: Git is not proficient as a backup tool for binary data; text works a lot better (it's like code). This article is dedicated to the backup of code and textual content.

Push is not a backup

For example, because it may lack branches. In general, pushing to origin is not even an option, as you may not want to push your changes yet but still want to perform a backup. It's only in the open source world that backup corresponds to publishing online. However, thanks to decentralization there are some simple solutions, involving the creation of repositories different from origin:

git clone /path/to/working/copy   # creates the backup
git pull                          # origin master of course; updates the backup
# you can specify better branches via the local configuration of the backup copy (git config)

The inverse solution, involving pushing, is also possible:

git init .   # in the folder of your backup, or you can use a remote repository
git remote add backup /path/to/backup/repo   # or a git:// repo
git push backup         # master usually, but also multiple branches
git push --all backup   # an alternative that pushes all branches

All these commands, also the ones that follow, are just bash commands: it's easy to create a script and automate its execution with cron, anacron or whatever you want. The Force of Unix is powerful in you.

git bundle

git bundle is another command that may be used for backup purposes. It creates a single file containing all the refs you need to export from your local repository. It's often used for publishing commits via USB keys or other media in the absence of a connection.

For one branch, it's simple. This command will create a myrepo.bundle file:

git bundle create myrepo.bundle master

For more branches or tags, it's still simple:

git bundle create myrepo.bundle master other_branch

Restoring the content of the bundle is a single command. Inside an empty repo, type:

git bundle unbundle myrepo.bundle

Instead, if you do not have a repo and just want to recreate the old one:

git clone myrepo.bundle -b master myrepo_folder

In emergency situations, bundle comes in handy. But my issue with that command is that I always forget something when I use it: for example, in my tutorial repository I had a lot of tags, but bundle did not include them by default (you have to specify the whole reference list, as for master other_branch).

Tarballs

An alternative is just to archive the repository in a tar.gz or tar.bz2 file:

tar -cf repository.tar repository/
gzip repository.tar   # or bzip2 repository.tar

After that, you can use scp or even rsync (but I don't think it will speed things up much) to put repository.tar.gz on another medium. The weight is higher in this case, since the repository also contains the checked-out working copy.
But you don't have to learn new commands: apart from the weight and the lack of incremental updates, this solution works fine.

Bare repositories

You can use git clone --bare repository/ backup_folder/ to create a bare copy of the repository as a backup. A bare repository does not maintain a checked-out working tree, and so saves space and time in its transfer. This method can be used in conjunction with the pull/push or the tarball method. For restoring the backup, git clone backup_folder/ new_repository/ will recreate the original situation in new_repository. In all of these cases the new folders are created automatically. I wouldn't advise simply copying the folder, since on other backup filesystems (like a USB key's vfat) permissions, ownership and other metadata are often lost.

Conclusion

So now you have some alternatives for backing up your repositories or transporting them without setting up a server like Gitosis or going through the publicly available GitHub. In fact, I researched these techniques for transporting my tutorial code to phpDay 2011 and the Dutch PHP Conference, and they have worked pretty well.
May 18, 2011
by Giorgio Sironi
· 72,355 Views · 1 Like
How I Resolved SpringMVC+Hibernate Error: No Hibernate Session Bound to Thread
I used the Spring MVC @Controller annotation approach and configured the web-related Spring configuration in dispatcher-servlet.xml.
May 18, 2011
by Siva Prasad Reddy Katamreddy
· 69,222 Views · 2 Likes
How to Iterate ArrayList in Struts2
We will discuss how to iterate over a collection of String objects using the Struts2 tag libraries, and then a List of custom class objects. It looks as if iterating a list of String objects is easier than iterating over a list of custom class objects in Struts2, but in reality iterating a list of custom class objects is just as easy. By custom class we mean the User, Employee, Department, Products or Vehicles classes that are created in any web application.

Download Working Sample Here

Usually one needs to fetch a list of records from a database or files and then display it in a JSP. The module requiring this functionality could be search, or listing users/departments/products, etc. The basic flow of a Struts2 web application goes like this: the user initiates the request from one page. This request is received by the interceptor, which further invokes the Struts2 action. The action class fetches the records and stores them in a list. This list is available to the next JSP through the public getter method. Please note that the public getter method for the List is mandatory. Once the List has been populated by the Struts2 action class, the JSP then iterates over this List and displays the corresponding information. In days gone by, one would store the List as a session attribute and then access it in the JSP using scriptlets to display the appropriate output to the users.

Here is a Struts2 sample application that iterates one String List and one custom class object List. Though we use the Struts2 tag library to iterate the list, JSTL could also be used. Also, if you are going to use the code examples given below, use the following URL to access the application: http://localhost:8080//index.action

Iterate a custom class ArrayList in Struts2

web.xml

<filter>
    <filter-name>struts2</filter-name>
    <filter-class>org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter</filter-class>
</filter>

<filter-mapping>
    <filter-name>struts2</filter-name>
    <url-pattern>*.action</url-pattern>
</filter-mapping>

struts.xml /home.jsp /success.jsp /failure.jsp

home.jsp Enter a user name to get the documents uploaded by that user. Username

success.jsp Documents uploaded by the user are:

failure.jsp

FetchAction.java

package com.example;

import java.util.ArrayList;
import java.util.List;

public class FetchAction {

    private String username;
    private String message;
    private List<Document> documents = new ArrayList<Document>();

    public List<Document> getDocuments() {
        return documents;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String execute() {
        if (username != null) {
            // logic to fetch the document list (say, from a database)
            Document d1 = new Document();
            d1.setName("user.doc");
            Document d2 = new Document();
            d2.setName("office.doc");
            Document d3 = new Document();
            d3.setName("transactions.doc");
            documents.add(d1);
            documents.add(d2);
            documents.add(d3);
            return "success";
        } else {
            message = "Unable to fetch";
            return "failure";
        }
    }
}

Document.java

package com.example;

public class Document {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Iterate a String List in Struts2

The way to iterate a String list is similar, the only difference being that the action class FetchAction.java now populates the names of the documents into an ArrayList of String objects, as in the sketch below.
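A minimal sketch of that String-list variant, following the FetchAction skeleton above (the StringFetchAction class name is made up for illustration):

package com.example;

import java.util.ArrayList;
import java.util.List;

public class StringFetchAction {

    private String username;
    private List<String> documents = new ArrayList<String>();

    // Struts2 exposes this list to the JSP through the public getter.
    public List<String> getDocuments() {
        return documents;
    }

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }

    public String execute() {
        if (username != null) {
            // Just the document names this time, not Document beans.
            documents.add("user.doc");
            documents.add("office.doc");
            documents.add("transactions.doc");
            return "success";
        }
        return "failure";
    }
}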
The code zip file containing the iteration over an ArrayList of custom class objects (beans) can be downloaded at: http://www.fileserve.com/file/QmrsJ7k

The URL to access this application will be: http://localhost:8080/IteratorExample/index.action

The code zip file containing the iteration over an ArrayList of String objects can be downloaded at: http://www.fileserve.com/file/V2kXkfx

The URL to access this application will be: http://localhost:8080/StringIteratorExample/index.action

From http://extreme-java.blogspot.com/2011/05/how-to-iterate-arraylist-in-struts2.html
May 17, 2011
by Sandeep Bhandari
· 70,755 Views
Spring Expression Language (SpEL) Predefined Variables
Spring 3.0 introduced the Spring Expression Language (SpEL). Two predefined variables are available to you: "systemProperties" and "systemEnvironment". SpEL allows us to access information from our beans, and from system beans, at runtime (late binding). These can be applied to bean fields as defaults, using the @Value annotation on a field or the equivalent options in XML.

systemProperties – a java.util.Properties object retrieving properties from the runtime environment

systemEnvironment – a java.util.Properties object retrieving environment-specific properties from the runtime environment

We can access specific elements from the Properties object with the syntax:

systemProperties['property.name']
systemEnvironment['property.name']

public class MyEnvironment {

    @Value("#{ systemProperties['user.language'] }")
    private String varOne;

    @Value("#{ systemProperties }")
    private java.util.Properties systemProperties;

    @Value("#{ systemEnvironment }")
    private java.util.Properties systemEnvironment;

    @Override
    public String toString() {
        return "\n\n********************** MyEnvironment: [\n\tvarOne=" + varOne
                + ", \n\tsystemProperties=" + systemProperties
                + ", \n\tsystemEnvironment=" + systemEnvironment + "]";
    }
}

Register the "MyEnvironment" bean in your Spring context and create a JUnit test to display the variables.

From http://gordondickens.com/wordpress/2011/05/12/spring-expression-language-spel-predefined-variables/
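If you want to see the values quickly without a test harness, a minimal sketch that registers the bean with annotation config and prints it looks like this (EnvConfig and EnvMain are made-up names; any Spring 3.0+ context that post-processes @Value would do):

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class EnvConfig {

    // Registers MyEnvironment; its @Value fields are resolved by the container.
    @Bean
    public MyEnvironment myEnvironment() {
        return new MyEnvironment();
    }
}

public class EnvMain {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(EnvConfig.class);
        // toString() dumps varOne plus the two predefined variables.
        System.out.println(ctx.getBean(MyEnvironment.class));
        ctx.close();
    }
}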
May 16, 2011
by Gordon Dickens
· 23,659 Views
Lucene's indexing is fast!
Wikipedia periodically exports all of the content on their site, providing a nice corpus for performance testing. I downloaded their most recent English XML export: it uncompresses to a healthy 21 GB of plain text! Then I fully indexed this with Lucene's current trunk (to be 4.0): it took 13 minutes and 9 seconds, or 95.8 GB/hour -- not bad!

Here are the details: I first pre-process the XML file into a single-line file, whereby each doc's title, date, and body are written to a single line, and then index from this file, so that I measure "pure" indexing cost. Note that a real app would likely have a higher document creation cost here, perhaps having to pull documents from a remote database or from separate files, run filters to extract text from PDFs or MS Office docs, etc.

I use Lucene's contrib/benchmark package to do the indexing; here's the .alg file I used:

analyzer=org.apache.lucene.analysis.standard.StandardAnalyzer
content.source = org.apache.lucene.benchmark.byTask.feeds.LineDocSource
docs.file = /lucene/enwiki-20100904-pages-articles.txt
doc.stored = true
doc.term.vector = false
doc.tokenized = false
doc.body.stored = false
doc.body.tokenized = true
log.step.AddDoc=10000
directory=FSDirectory
compound=false
ram.flush.mb = 256
work.dir=/lucene/indices/enwiki
content.source.forever = false

CreateIndex
{ "BuildIndex"
  [ { "AddDocs" AddDoc > : * ] : 6
  - CloseIndex
}
RepSumByPrefRound BuildIndex

There is no field truncation taking place, since this is now disabled by default -- every token in every Wikipedia article is being indexed. I tokenize the body field and don't store it, and I don't tokenize the title and date fields, but do store them. I use StandardAnalyzer, and I include the time to close the index, which means IndexWriter waits for any running background merges to complete. The index has only 4 fields -- title, date, body, and docid.

I've done a few things to speed up the indexing:

  • Increase IndexWriter's RAM buffer from the default 16 MB to 256 MB
  • Run with 6 threads
  • Disable the compound file format
  • Reuse document/field instances (contrib/benchmark does this by default)

Lucene's wiki describes additional steps you can take to speed up indexing.

Both the source lines file and the index are on an Intel X25-M SSD, and I'm running on a modern machine, with dual Xeon X5680s overclocked to 4.0 GHz and 12 GB RAM, running Fedora Linux. Java is 64-bit 1.6.0_21-b06, and I run with java -server -Xmx2g -Xms2g. I could certainly give it more RAM, but it's not really needed. The resulting index is 6.9 GB.

Out of curiosity, I made a small change to contrib/benchmark to print the ingest rate over time. It looks like this (over a 100-second window):

Note that a large part (slightly over half!) of the time, the ingest rate is 0; this is not good! This happens because the flushing process, which writes a new segment when the RAM buffer is full, is single-threaded and blocks all indexing while it's running. This is a known issue, and is actively being addressed under LUCENE-2324. Flushing is CPU intensive -- the decode and re-encode of the great many vInts is costly. Computers usually have big write caches these days, so the IO shouldn't be a bottleneck. With LUCENE-2324, each indexing thread state will flush its own segment, privately, which will allow us to make full use of CPU concurrency and IO concurrency, as well as concurrency across CPUs and the IO system.
Once this is fixed, Lucene should be able to make full use of the hardware, i.e., fully saturate either concurrent CPU or concurrent IO, such that whichever is the bottleneck in your context gates your ingest rate. Then maybe we can double this already fast ingest rate!
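For reference, the plain-code equivalent of this setup, outside contrib/benchmark, looks roughly like the sketch below. It uses the Lucene 3.x-era API (the post ran trunk/4.0, whose API differed), a single thread rather than six, and assumes the tab-separated title/date/body line format the pre-processing step produces; paths mirror the .alg settings above.

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class LineFileIndexer {
    public static void main(String[] args) throws Exception {
        IndexWriterConfig config = new IndexWriterConfig(
                Version.LUCENE_31, new StandardAnalyzer(Version.LUCENE_31))
                .setRAMBufferSizeMB(256.0); // 256 MB RAM buffer, as in the post
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File("/lucene/indices/enwiki")), config);

        BufferedReader reader = new BufferedReader(new InputStreamReader(
                new FileInputStream("/lucene/enwiki-20100904-pages-articles.txt"), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            // One doc per line: title TAB date TAB body (the pre-processed format).
            String[] parts = line.split("\t", 3);
            Document doc = new Document();
            doc.add(new Field("title", parts[0], Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.add(new Field("date", parts[1], Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.add(new Field("body", parts[2], Field.Store.NO, Field.Index.ANALYZED));
            writer.addDocument(doc); // the real benchmark feeds this from 6 threads
        }
        reader.close();
        writer.close(); // waits for background merges, as measured in the post
    }
}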
May 15, 2011
by Michael Mccandless
· 15,166 Views