The Latest Coding Topics

Lucene's indexing is fast!
Wikipedia periodically exports all of the content on their site, providing a nice corpus for performance testing. I downloaded their most recent English XML export: it uncompresses to a healthy 21 GB of plain text! Then I fully indexed this with Lucene's current trunk (to be 4.0): it took 13 minutes and 9 seconds, or 95.8 GB/hour -- not bad!

Here are the details: I first pre-process the XML file into a single-line file, whereby each doc's title, date, and body are written to a single line, and then index from this file, so that I measure "pure" indexing cost. Note that a real app would likely have a higher document creation cost here, perhaps having to pull documents from a remote database or from separate files, run filters to extract text from PDFs or MS Office docs, etc. I use Lucene's contrib/benchmark package to do the indexing; here's the alg I used:

analyzer=org.apache.lucene.analysis.standard.StandardAnalyzer
content.source = org.apache.lucene.benchmark.byTask.feeds.LineDocSource
docs.file = /lucene/enwiki-20100904-pages-articles.txt
doc.stored = true
doc.term.vector = false
doc.tokenized = false
doc.body.stored = false
doc.body.tokenized = true
log.step.AddDoc=10000
directory=FSDirectory
compound=false
ram.flush.mb = 256
work.dir=/lucene/indices/enwiki
content.source.forever = false

CreateIndex
{ "BuildIndex"
  [ { "AddDocs" AddDoc > : * ] : 6
  - CloseIndex
}
RepSumByPrefRound BuildIndex

There is no field truncation taking place, since this is now disabled by default -- every token in every Wikipedia article is being indexed. I tokenize the body field and don't store it, and I don't tokenize the title and date fields but do store them. I use StandardAnalyzer, and I include the time to close the index, which means IndexWriter waits for any running background merges to complete. The index has only 4 fields -- title, date, body, and docid.

I've done a few things to speed up the indexing:
- Increase IndexWriter's RAM buffer from the default 16 MB to 256 MB
- Run with 6 threads
- Disable the compound file format
- Reuse document/field instances (contrib/benchmark does this by default)

Lucene's wiki describes additional steps you can take to speed up indexing.

Both the source lines file and the index are on an Intel X25-M SSD, and I'm running it on a modern machine, with dual Xeon X5680s, overclocked to 4.0 GHz, with 12 GB RAM, running Fedora Linux. Java is 64-bit 1.6.0_21-b06, and I run as java -server -Xmx2g -Xms2g. I could certainly give it more RAM, but it's not really needed. The resulting index is 6.9 GB.

Out of curiosity, I made a small change to contrib/benchmark to print the ingest rate over time, measured over a 100-second window. A large part (slightly over half!) of the time, the ingest rate is 0; this is not good! This happens because the flushing process, which writes a new segment when the RAM buffer is full, is single-threaded and blocks all indexing while it's running. This is a known issue and is actively being addressed under LUCENE-2324. Flushing is CPU intensive -- the decode and re-encode of the great many vInts is costly. Computers usually have big write caches these days, so the IO shouldn't be a bottleneck. With LUCENE-2324, each indexing thread state will flush its own segment, privately, which will allow us to make full use of both CPU and IO concurrency.
Once this is fixed, Lucene should be able to make full use of the hardware, i.e., fully saturate either concurrent CPU or concurrent IO, so that whichever is the bottleneck in your context gates your ingest rate. Then maybe we can double this already fast ingest rate!
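The numbers above were produced through contrib/benchmark, but the same tuning applies if you drive IndexWriter directly. The following is only a rough sketch under those assumptions -- it uses the Lucene 3.x-era API (adjust the Version constant to your release), and loadLineDocs() is a hypothetical stub standing in for parsing the single-line file -- not the code used for the benchmark:

import java.io.File;
import java.util.Collections;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class BulkIndexer {
    public static void main(String[] args) throws Exception {
        // Larger RAM buffer (default is 16 MB) so segments are flushed less often.
        IndexWriterConfig config = new IndexWriterConfig(
                Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36));
        config.setRAMBufferSizeMB(256);

        final IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File("/lucene/indices/enwiki")), config);

        // IndexWriter is thread-safe, so several threads can call addDocument()
        // concurrently -- this mirrors the 6 indexing threads in the benchmark.
        ExecutorService pool = Executors.newFixedThreadPool(6);
        for (final String[] doc : loadLineDocs()) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        Document d = new Document();
                        d.add(new Field("title", doc[0], Field.Store.YES, Field.Index.NOT_ANALYZED));
                        d.add(new Field("date", doc[1], Field.Store.YES, Field.Index.NOT_ANALYZED));
                        d.add(new Field("body", doc[2], Field.Store.NO, Field.Index.ANALYZED));
                        writer.addDocument(d);
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);

        // Closing waits for background merges, as in the benchmark timings.
        writer.close();
    }

    // Hypothetical stub: would yield {title, date, body} triples from the line file.
    private static Iterable<String[]> loadLineDocs() {
        return Collections.<String[]>emptyList();
    }
}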
May 15, 2011
by Michael McCandless
· 15,166 Views
Java Web Application Security - Part I: Java EE 6 Login Demo
Back in February, I wrote about my upcoming conferences: In addition to Vegas and Poland, there's a couple other events I might speak at in the next few months: the Utah Java Users Group (possibly in April), Jazoon and ÜberConf (if my proposals are accepted). For these events, I'm hoping to present the following talk: Webapp Security: Develop. Penetrate. Protect. Relax. In this session, you'll learn how to implement authentication in your Java web applications using Spring Security, Apache Shiro and good ol' Java EE Container Managed Authentication. You'll also learn how to secure your REST API with OAuth and lock it down with SSL. After learning how to develop authentication, I'll introduce you to OWASP, the OWASP Top 10, its Testing Guide and its Code Review Guide. From there, I'll discuss using WebGoat to verify your app is secure and commercial tools like webapp firewalls and accelerators. Fast forward a couple months and I'm happy to say that I've completed my talk at the Utah JUG and it's been accepted at Jazoon and Über Conf. For this talk, I created a presentation that primarily consists of demos implementing basic, form and Ajax authentication using Java EE 6, Spring Security and Apache Shiro. In the process of creating the demos, I learned (or re-educated myself) how to do a number of things in all 3 frameworks: Implement Basic Authentication Implement Form-based Authentication Implement Ajax HTTP -> HTTPS Authentication (with programmatic APIs) Force SSL for certain URLs Implement a file-based store of users and passwords (in Jetty/Maven and Tomcat standalone) Implement a database store of users and passwords (in Jetty/Maven and Tomcat standalone) Encrypt Passwords Secure methods with annotations For the demos, I showed the audience how to do almost all of these, but skipped Tomcat standalone and securing methods in the interest of time. In July, when I do this talk at ÜberConf, I plan on adding 1) hacking the app (to show security holes) and 2) fixing it to protect it against vulnerabilities. I told the audience at UJUG that I would post the presentation and was planning on recording screencasts of the various demos so the online version of the presentation would make more sense. Today, I've finished the first screencast showing how to implement security with Java EE 6. Below is the presentation (with the screencast embedded on slide 10) as well as a step-by-step tutorial. * You can also watch the screencast on YouTube or download the presentation PDF. Java EE 6 Login Tutorial Download and Run the Application Implement Basic Authentication Implement Form-based Authentication Force SSL Store Users in a Database Summary Download and Run the Application To begin, download the application you'll be implementing security in. This app is a stripped-down version of the Ajax Login application I wrote for my article on Implementing Ajax Authentication using jQuery, Spring Security and HTTPS. You'll need Java 6 and Maven installed to run the app. Run it using mvn jetty:run and open http://localhost:8080 in your browser. You'll see it's a simple CRUD application for users and there's no login required to add or delete users. Implement Basic Authentication The first step is to protect the list screen so people have to login to view users. 
To do this, add the following to the bottom of src/main/webapp/WEB-INF/web.xml: users /users GET POST ROLE_ADMIN BASIC Java EE Login ROLE_ADMIN At this point, if you restart Jetty (Ctrl+C and jetty:run again), you'll get an error about a missing LoginService. This happens because Jetty doesn't know where the "Java EE Login" realm is located. Add the following to pom.xml, just after in the Jetty plugin's configuration. Java EE Login ${basedir}/src/test/resources/realm.properties The realm.properties file already exists in the project and contains user names and passwords. Start the app again using mvn jetty:run and you should be prompted to login when you click on the "Users" tab. Enter admin/admin to login. After logging in, you can try to logout by clicking the "Logout" link in the top-right corner. This calls a LogoutController with the following code that logs the user out. public void logout(HttpServletResponse response) throws ServletException, IOException { request.getSession().invalidate(); response.sendRedirect(request.getContextPath()); } You'll notice that clicking this link doesn't log you out, even though the session is invalidated. The only way to logout with basic authentication is to close the browser. In order to get the ability to logout, as well as to have more control over the look-and-feel of the login, you can implement form-based authentication. Implement Form-based Authentication To change from basic to form-based authentication, you simply have to replace the in your web.xml with the following: FORM /login.jsp /login.jsp?error=true The login.jsp page already exists in the project, in the src/main/webapp directory. This JSP has 3 important elements: 1) a form that submits to "${contextPath}/j_security_check", 2) an input element named "j_username" and 3) an input element named "j_password". If you restart Jetty, you'll now be prompted to login with this JSP instead of the basic authentication dialog. Force SSL Another thing you might want to implement to secure your application is forcing SSL for certain URLs. To do this on the same you already have in web.xml, add the following after : CONFIDENTIAL To configure Jetty to listen on an SSL port, add the following just after in your pom.xml: true 8080 true 8443 60000 ${project.build.directory}/ssl.keystore appfuse appfuse The keystore must be generated for Jetty to start successfully, so add the keytool-maven-plugin just above the jetty-maven-plugin in pom.xml. org.codehaus.mojo keytool-maven-plugin 1.0 generate-resources clean clean generate-resources genkey genkey ${project.build.directory}/ssl.keystore cn=localhost appfuse appfuse appfuse RSA Now if you restart Jetty, go to http://localhost:8080 and click on the "Users" tab, you'll get a 403. What the heck?! When this first happened to me, it took me a while to figure out. It turns out that Jetty doesn't redirect to HTTPS when using Java EE authentication, so you have to manually type in https://localhost:8443/ (or add a filter to redirect for you). If you deployed this same application on Tomcat (after enabling SSL), it would redirect for you. Store Users in a Database Finally, to store your users in a database instead of file, you'll need to change the in the Jetty plugin's configuration. Replace the existing element with the following: Java EE Login ${basedir}/src/test/resources/jdbc-realm.properties The jdbc-realm.properties file already exists in the project and contains the database settings and table/column names for the user and role information. 
jdbcdriver = com.mysql.jdbc.Driver url = jdbc:mysql://localhost/appfuse username = root password = usertable = app_user usertablekey = id usertableuserfield = username usertablepasswordfield = password roletable = role roletablekey = id roletablerolefield = name userroletable = user_role userroletableuserkey = user_id userroletablerolekey = role_id cachetime = 300 Of course, you'll need to install MySQL for this to work. After installing it, you should be able to create an "appfuse" database and populate it using the following commands: mysql -u root -p -e 'create database appfuse' curl https://gist.github.com/raw/958091/ceecb4a6ae31c31429d5639d0d1e6bfd93e2ea42/create-appfuse.sql > create-appfuse.sql mysql -u root -p appfuse < create-appfuse.sql Next you'll need to configure Jetty so it has MySQL's JDBC Driver in its classpath. To do this, add the following dependency just after the element (before ) in pom.xml: mysql mysql-connector-java 5.1.14 Now run the jetty-password.sh file in the root directory of the project to generate a password of your choosing. For example: $ sh jetty-password.sh javaeelogin javaeelogin OBF:1vuj1t2v1wum1u9d1ugo1t331uh21ua51wts1t3b1vur MD5:53b176e6ce1b5183bc970ef1ebaffd44 The last two lines are obfuscated and MD5 versions of the password. Update the admin user's password to this new value. You can do this with the following SQL statement. UPDATE app_user SET password='MD5:53b176e6ce1b5183bc970ef1ebaffd44' WHERE username = 'admin'; Now if you restart Jetty, you should be able to login with admin/javaeelogin and view the list of users. Summary In this tutorial, you learned how to implement authentication using standard Java EE 6. In addition to the basic XML configuration, there's also some new methods in HttpServletRequest for Java EE 6 and Servlet 3.0: authenticate(response) login(user, pass) logout() This tutorial doesn't show you how to use them, but I did play with them a bit as part of my UJUG demo when implementing Ajax authentication. I found that login() did work, but it didn't persist the authentication for the users session. I also found that after calling logout(), I still needed to invalidate the session to completely logout the user. There are some additional limitations I found with Java EE authentication, namely: No error messages for failed logins No Remember Me No auto-redirect from HTTP to HTTPS Container has to be configured Doesn’t support regular expressions for URLs Of course, no error messages indicating why login failed is probably a good thing (you don't want to tell users why their credentials failed). However, when you're trying to figure out if your container is configured properly, the lack of container logging can be a pain. In the next couple weeks, I'll post Part II of this series, where I'll show you how to implement this same set of features using Spring Security. In the meantime, please let me know if you have any questions. From http://raibledesigns.com/rd/entry/java_web_application_security_part
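The summary above mentions, but does not demonstrate, the programmatic authenticate()/login()/logout() methods that Servlet 3.0 adds to HttpServletRequest. As a minimal, hypothetical sketch (the URL pattern, parameter names and redirect targets are illustrative and not part of the demo project), a servlet driving those calls could look like this:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/programmatic-login")
public class ProgrammaticLoginServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String username = request.getParameter("j_username");
        String password = request.getParameter("j_password");
        try {
            // Authenticates against whatever realm the container is configured with.
            request.login(username, password);
            response.sendRedirect(request.getContextPath() + "/users");
        } catch (ServletException failedLogin) {
            response.sendRedirect(request.getContextPath() + "/login.jsp?error=true");
        }
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Ends container-managed authentication; as noted in the article,
        // invalidating the session as well is still advisable.
        request.logout();
        request.getSession().invalidate();
        response.sendRedirect(request.getContextPath());
    }
}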
May 15, 2011
by Matt Raible
· 41,459 Views · 3 Likes
Real time monitoring PHP applications with websockets and node.js
The inspection of the error logs is a common way to detect errors and bugs. We also can show errors on-screen within our developement server, or we even can use great tools like firePHP to show our PHP errors and warnings inside our firebug console. That’s cool, but we only can see our session errors/warnings. If we want to see another’s errors we need to inspect the error log. tail -f is our friend, but we need to surf against all the warnings of all sessions to see our desired ones. Because of that I want to build a tool to monitor my PHP applications in real-time. Let’s start: What’s the idea? The idea is catch all PHP’s errors and warnings at run time and send them to a node.js HTTP server. This server will work similar than a chat server but our clients will only be able to read the server’s logs. Basically the applications have three parts: the node.js server, the web client (html5) and the server part (PHP). Let me explain a bit each part: The node Server Basically it has two parts: a http server to handle the PHP errors/warnings and a websocket server to manage the realtime communications with the browser. When I say that I’m using websockets that’s means the web client will only work with a browser with websocket support like chrome. Anyway it’s pretty straightforward swap from a websocket sever to a socket.io server to use it with every browser. But websockets seems to be the future, so I will use websockets in this example. The http server: http.createServer(function (req, res) { var remoteAdrress = req.socket.remoteAddress; if (allowedIP.indexOf(remoteAdrress) >= 0) { res.writeHead(200, { 'Content-Type': 'text/plain' }); res.end('Ok\n'); try { var parsedUrl = url.parse(req.url, true); var type = parsedUrl.query.type; var logString = parsedUrl.query.logString; var ip = eval(parsedUrl.query.logString)[0]; if (inspectingUrl == "" || inspectingUrl == ip) { clients.forEach(function(client) { client.write(logString); }); } } catch(err) { console.log("500 to " + remoteAdrress); res.writeHead(500, { 'Content-Type': 'text/plain' }); res.end('System Error\n'); } } else { console.log("401 to " + remoteAdrress); res.writeHead(401, { 'Content-Type': 'text/plain' }); res.end('Not Authorized\n'); } }).listen(httpConf.port, httpConf.host); and the web socket server: var inspectingUrl = undefined; ws.createServer(function(websocket) { websocket.on('connect', function(resource) { var parsedUrl = url.parse(resource, true); inspectingUrl = parsedUrl.query.ip; clients.push(websocket); }); websocket.on('close', function() { var pos = clients.indexOf(websocket); if (pos >= 0) { clients.splice(pos, 1); } }); }).listen(wsConf.port, wsConf.host); If you want to know more about node.js and see more examples, have a look to the great site: http://nodetuts.com/. In this site Pedro Teixeira will show examples and node.js tutorials. In fact my node.js http + websoket server is a mix of two tutorials from this site. The web client. The web client is a simple websockets application. We will handle the websockets connection, reconnect if it dies and a bit more. 
I’s based on node.js chat demo Real time monitor Socket status: Conecting ...IP: [all]" ?>count: 0 And the javascript magic var timeout = 5000; var wsServer = '192.168.2.2:8880'; var unread = 0; var focus = false; var count = 0; function updateCount() { count++; $("#count").text(count); } function cleanString(string) { return string.replace(/&/g,"&").replace(//g,">"); } function updateUptime () { var now = new Date(); $("#uptime").text(now.toRelativeTime()); } function updateTitle(){ if (unread) { document.title = "(" + unread.toString() + ") Real time " + selectedIp + " monitor"; } else { document.title = "Real time " + selectedIp + " monitor"; } } function pad(n) { return ("0" + n).slice(-2); } function startWs(ip) { try { ws = new WebSocket("ws://" + wsServer + "?ip=" + ip); $('#toolbar').css('background', '#65A33F'); $('#socketStatus').html('Connected to ' + wsServer); //console.log("startWs:" + ip); //listen for browser events so we know to update the document title $(window).bind("blur", function() { focus = false; updateTitle(); }); $(window).bind("focus", function() { focus = true; unread = 0; updateTitle(); }); } catch (err) { //console.log(err); setTimeout(startWs, timeout); } ws.onmessage = function(event) { unread++; updateTitle(); var now = new Date(); var hh = pad(now.getHours()); var mm = pad(now.getMinutes()); var ss = pad(now.getSeconds()); var timeMark = '[' + hh + ':' + mm + ':' + ss + '] '; logString = eval(event.data); var host = logString[0]; var line = "" + timeMark + "" + host + ""; line += "" + logString[1]; + ""; if (logString[2]) { line += " " + logString[2] + ""; } $('#log').append(line); updateCount(); window.scrollBy(0, 100000000000000000); }; ws.onclose = function(){ //console.log("ws.onclose"); $('#toolbar').css('background', '#933'); $('#socketStatus').html('Disconected'); setTimeout(function() {startWs(selectedIp)}, timeout); } } $(document).ready(function() { startWs(selectedIp); }); The server part: The server part will handle silently all PHP warnings and errors and it will send them to the node server. The idea is to place a minimal PHP line of code at the beginning of the application that we want to monitor. Imagine the following piece of PHP code $a = $var[1]; $a = 1/0; class Dummy { static function err() { throw new Exception("error"); } } Dummy1::err(); it will throw: A notice: Undefined variable: var A warning: Division by zero An Uncaught exception ‘Exception’ with message ‘error’ So we will add our small library to catch those errors and send them to the node server include('client/NodeLog.php'); NodeLog::init('192.168.2.2'); $a = $var[1]; $a = 1/0; class Dummy { static function err() { throw new Exception("error"); } } Dummy1::err(); The script will work in the same way than the fist version but if we start our node.js server in a console: $ node server.js HTTP server started at 192.168.2.2::5672 Web Socket server started at 192.168.2.2::8880 We will see those errors/warnings in real-time when we start our browser Here we can see a small screencast with the working application: This is the server side library: class NodeLog { const NODE_DEF_HOST = '127.0.0.1'; const NODE_DEF_PORT = 5672; private $_host; private $_port; /** * @param String $host * @param Integer $port * @return NodeLog */ static function connect($host = null, $port = null) { return new self(is_null($host) ? self::$_defHost : $host, is_null($port) ? 
self::$_defPort : $port); } function __construct($host, $port) { $this->_host = $host; $this->_port = $port; } /** * @param String $log * @return Array array($status, $response) */ public function log($log) { list($status, $response) = $this->send(json_encode($log)); return array($status, $response); } private function send($log) { $url = "http://{$this->_host}:{$this->_port}?logString=" . urlencode($log); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_NOBODY, true); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); $response = curl_exec($ch); $status = curl_getinfo($ch, CURLINFO_HTTP_CODE); curl_close($ch); return array($status, $response); } static function getip() { $realip = '0.0.0.0'; if ($_SERVER) { if ( isset($_SERVER['HTTP_X_FORWARDED_FOR']) && $_SERVER['HTTP_X_FORWARDED_FOR'] ) { $realip = $_SERVER["HTTP_X_FORWARDED_FOR"]; } elseif ( isset($_SERVER['HTTP_CLIENT_IP']) && $_SERVER["HTTP_CLIENT_IP"] ) { $realip = $_SERVER["HTTP_CLIENT_IP"]; } else { $realip = $_SERVER["REMOTE_ADDR"]; } } else { if ( getenv('HTTP_X_FORWARDED_FOR') ) { $realip = getenv('HTTP_X_FORWARDED_FOR'); } elseif ( getenv('HTTP_CLIENT_IP') ) { $realip = getenv('HTTP_CLIENT_IP'); } else { $realip = getenv('REMOTE_ADDR'); } } return $realip; } public static function getErrorName($err) { $errors = array( E_ERROR => 'ERROR', E_RECOVERABLE_ERROR => 'RECOVERABLE_ERROR', E_WARNING => 'WARNING', E_PARSE => 'PARSE', E_NOTICE => 'NOTICE', E_STRICT => 'STRICT', E_DEPRECATED => 'DEPRECATED', E_CORE_ERROR => 'CORE_ERROR', E_CORE_WARNING => 'CORE_WARNING', E_COMPILE_ERROR => 'COMPILE_ERROR', E_COMPILE_WARNING => 'COMPILE_WARNING', E_USER_ERROR => 'USER_ERROR', E_USER_WARNING => 'USER_WARNING', E_USER_NOTICE => 'USER_NOTICE', E_USER_DEPRECATED => 'USER_DEPRECATED', ); return $errors[$err]; } private static function set_error_handler($nodeHost, $nodePort) { set_error_handler(function ($errno, $errstr, $errfile, $errline) use($nodeHost, $nodePort) { $err = NodeLog::getErrorName($errno); /* if (!(error_reporting() & $errno)) { // This error code is not included in error_reporting return; } */ $log = array( NodeLog::getip(), "{$err} {$errfile}:{$errline}", nl2br($errstr) ); NodeLog::connect($nodeHost, $nodePort)->log($log); return false; }); } private static function register_exceptionHandler($nodeHost, $nodePort) { set_exception_handler(function($exception) use($nodeHost, $nodePort) { $exceptionName = get_class($exception); $message = $exception->getMessage(); $file = $exception->getFile(); $line = $exception->getLine(); $trace = $exception->getTraceAsString(); $msg = count($trace) > 0 ? 
"Stack trace:\n{$trace}" : null; $log = array( NodeLog::getip(), nl2br("Uncaught exception '{$exceptionName}' with message '{$message}' in {$file}:{$line}"), nl2br($msg) ); NodeLog::connect($nodeHost, $nodePort)->log($log); return false; }); } private static function register_shutdown_function($nodeHost, $nodePort) { register_shutdown_function(function() use($nodeHost, $nodePort) { $error = error_get_last(); if ($error['type'] == E_ERROR) { $err = NodeLog::getErrorName($error['type']); $log = array( NodeLog::getip(), "{$err} {$error['file']}:{$error['line']}", nl2br($error['message']) ); NodeLog::connect($nodeHost, $nodePort)->log($log); } echo NodeLog::connect($nodeHost, $nodePort)->end(); }); } private static $_defHost = self::NODE_DEF_HOST; private static $_defPort = self::NODE_DEF_PORT; /** * @param String $host * @param Integer $port * @return NodeLog */ public static function init($host = self::NODE_DEF_HOST, $port = self::NODE_DEF_PORT) { self::$_defHost = $host; self::$_defPort = $port; self::register_exceptionHandler($host, $port); self::set_error_handler($host, $port); self::register_shutdown_function($host, $port); $node = self::connect($host, $port); $node->start(); return $node; } private static $time; private static $mem; public function start() { self::$time = microtime(TRUE); self::$mem = memory_get_usage(); $log = array(NodeLog::getip(), "Start >>>> {$_SERVER['REQUEST_URI']}"); $this->log($log); } public function end() { $mem = (memory_get_usage() - self::$mem) / (1024 * 1024); $time = microtime(TRUE) - self::$time; $log = array(NodeLog::getip(), "End <<<< mem: {$mem} time {$time}"); $this->log($log); } } And of course the full code on gitHub: RealTimeMonitor
May 15, 2011
by Gonzalo Ayuso
· 28,998 Views
Reset MySQL Root Password On Linux
Five easy steps to reset the MySQL root password:

1. Stop the MySQL server.
2. Start the MySQL server with the --skip-grant-tables option (it will not prompt for a password).
3. Connect to the MySQL server as the root user.
4. Set up the new MySQL root password.
5. Exit and restart the MySQL server.

### Shell Commands ###
/etc/init.d/mysql stop
mysqld_safe --skip-grant-tables &
mysql -u root

### SQL Commands ###
use mysql;
UPDATE user SET password=PASSWORD("new-password") WHERE user='root';
flush privileges;
exit

### Shell Commands ###
/etc/init.d/mysql stop
/etc/init.d/mysql start
mysql -u root -pnew-password
May 13, 2011
by Artur Mkrtchyan
· 5,779 Views
Practical PHP Testing Patterns: Transaction Rollback Teardown
Maintaining isolation of tests when they have a database as Shared Fixture is not a trivial task. An important constraint is not having the headache of keeping track what manipulations on the database has your code done; in that case the rollback may not even be performed in case of a regression. An alternative way to resetting the database via DELETE and TRUNCATE queries is to roll back a transaction which has been started in the setup phase during the teardown. Implementation The phases of a database test involving Transaction Rollback Teardown are roughly the following: begin transaction, usually in setUp(). arrange, act, assert actions in the various Test Methods. rollback of the transaction in teardown(). The active transaction is never committed. An issue with using this pattern is that code that already uses transaction is prone to generate errors, and ultimately should never be tested with this technique. The rules for your safety are simple: the SUT should never start a transaction or committing it. Some databases support nested transaction levels, but it's very brittle to use them for testing purposes, and in case of any failure the whole suite will blow up as test executes teardowns at the wrong level of nesting. This pattern safety is also difficult to ensure, as DDL statements like CREATE/DELETE or other commands may commit the current transaction automatically. Check the documentation of your testing database. The advantage of this pattern is great performance: rollback is faster than every other command, including TRUNCATE. Moreover, if you encapsulate transactions well in your production code, most of it won't commit them (typically leaving the control over the transaction to an upper layer). Doctrine 2 In a sense, we already use this pattern with an UnitOfWork ORM such as Doctrine 2 when we do not flush() the ORM in our code. The flow is: The database is ready by setup. Exercise code. Check results as persisted or removed entities. Instead of calling flush() over the Entity Manager, call clear(). In this case, the database never sees a transaction, as Doctrine 2 keeps everything in the Unit Of Work until you say to flush it. Even when your code is calling flush(), you can explicitly use beginTransaction() and rollback() over the connection object: in this other scenario, the testing database sees an open transaction, but it's never committed and can be discarded in teardown() like the pattern prescribes. Example The code sample is the same test case shown in the Table Truncation Teardown article, which now uses transactions to encapsulate the single tests. The various tests check the tables content is restored, along with the AUTOINCREMENT next value. 
exec('CREATE TABLE users ( id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR(255) )'); } $this->connection = self::$sharedConnection; $this->connection->beginTransaction(); } public function teardown() { $this->connection->rollback(); } public function testTableCanBePopulated() { $this->connection->exec('INSERT INTO users (name) VALUES ("Giorgio")'); $this->assertEquals(1, $this->howManyUsers()); } public function testTableRestartsFrom1() { $this->assertEquals(0, $this->howManyUsers()); $this->connection->exec('INSERT INTO users (name) VALUES ("Isaac")'); $stmt = $this->connection->query('SELECT name FROM users WHERE id=1'); $result = $stmt->fetch(); $this->assertEquals('Isaac', $result['name']); } public function testTableIsEmpty() { $this->assertEquals(0, $this->howManyUsers()); } private function howManyUsers() { $stmt = $this->connection->query('SELECT COUNT(*) AS number FROM users'); $result = $stmt->fetch(); return $result['number']; } }
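The pattern is not PHP-specific, and seeing it in another stack can make the mechanics clearer. As an illustration only (not part of the article's PHPUnit example), here is roughly what Transaction Rollback Teardown looks like with plain JDBC and JUnit 4; the in-memory HSQLDB URL is a made-up placeholder, the driver is assumed to be on the classpath, and the users table is assumed to have been created once for the whole suite, as in the article's setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TransactionRollbackTeardownTest {
    private Connection connection;

    @Before
    public void setUp() throws Exception {
        // Hypothetical shared test database; in a real suite the connection
        // would typically be a shared fixture created once per class.
        connection = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        connection.setAutoCommit(false); // effectively "begin transaction"
    }

    @After
    public void tearDown() throws Exception {
        connection.rollback();           // undo everything the test did
        connection.close();
    }

    @Test
    public void tableCanBePopulated() throws Exception {
        try (Statement stmt = connection.createStatement()) {
            stmt.executeUpdate("INSERT INTO users (name) VALUES ('Giorgio')");
            ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM users");
            rs.next();
            assertEquals(1, rs.getInt(1));
        }
    }
}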
May 11, 2011
by Giorgio Sironi
· 7,384 Views
10 Tricky Java Interview Questions
Here are some uncommon Java interview questions:

1. What is the performance effect of a large number of import statements which are not used?
Answer: They are ignored if the corresponding class is not used.

2. Give a scenario where HotSpot will optimize your code.
Answer: If we have defined a variable as static and then initialized this variable in a static block, then HotSpot will merge the variable and the initialization into a single statement and hence reduce the code.

3. What will happen if an exception is thrown from the finally block?
Answer: The program will exit if the exception is not caught in the finally block.

4. How does the decorator design pattern work in the I/O classes?
Answer: Classes like BufferedReader and BufferedWriter work on the underlying stream classes. Thus a Buffered* class provides a buffer for the Reader/Writer classes.

5. If I give you an assignment to design a shopping cart web application, how will you define the architecture of this application? You are free to choose any framework, tool or server.
Answer: Usually I will choose an MVC framework, which will make me use other design patterns like Front Controller, Business Delegate, Service Locator, DAO, DTO, Loose Coupling, etc. Struts 2 is very easy to configure and comes with plugins like Tiles, Velocity and Validator. The architecture of Struts becomes the architecture of my application, with the various actions and corresponding JSP pages in place.

6. What is a deadlock in Java? How will you detect and get rid of deadlocks?
Answer: A deadlock exists when two threads each try to acquire a lock that is already held by the other.

7. Why is it better to use Hibernate than JDBC for database interaction in various Java applications?
Answer: Hibernate provides an OO view of the database by mapping the various classes to the database tables. This helps in thinking in terms of the OO language rather than in RDBMS terms and hence increases productivity.

8. How can one call one constructor from another constructor in a class?
Answer: Use this() to refer to the other constructor (see the snippet below).

9. What is the purpose of the intern() method in the String class?
Answer: It moves normal string objects to the String literal pool.

10. How will you make your web application use the https protocol?
Answer: This has more to do with the particular server being used than with the application itself. Here is how it can be done on Tomcat: http://tomcat.apache.org/tomcat-4.1-doc/ssl-howto.html

From http://extreme-java.blogspot.com/2011/05/10-tricky-java-interview-questions.html
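As a small, self-contained illustration of two of the answers above (constructor chaining with this() and String.intern()), consider the following snippet; the class name is arbitrary:

public class Rectangle {
    private final int width;
    private final int height;

    public Rectangle(int side) {
        this(side, side);            // delegates to the two-argument constructor below
    }

    public Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public static void main(String[] args) {
        String a = new String("hello");      // a new object on the heap
        String b = "hello";                  // comes from the string literal pool
        System.out.println(a == b);          // false: different objects
        System.out.println(a.intern() == b); // true: intern() returns the pooled instance
    }
}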
May 10, 2011
by Sandeep Bhandari
· 101,519 Views · 1 Like
Practical PHP Testing Patterns: Stored Procedure Test
It happened in the day before the advent of DDD and the Hexagonal architecture, that you had code that lived inside the database, such as Stored Procedures, constraints, and triggers. Back in the day, the relational database was considered the single source of truth instead of a Domain Model written in a language like PHP or Java. Today the picture is different - but there are still scenarios where pushing code in the database make sense. One of the reasons for having logic expressed as SQL and in other database languages is their power, and their performance. SQL operators, especially when augmented by proprietary extensions, let you declare pieces of logic that you would instead have to code by hand. SQL that is executed directly on the database can accomplish operations too onerous to perform over a reconstituted object graph with a subsequent saving. In fact, every decent ORM include a language for batch updates that translates to SQL, like Doctrine with DQL; and also a mechanism for providing hints for the underlying database, like indexes definitions. The problem with SQL derivates and other database-specific embedded logic is that we cannot execute it and test it in isolation - we need a real copy of the database to perform our tests. Thus the Stored Procedure Test is an umbrella term for tests that encompasses database code, even when they're not actually stored procedures. When I'll use the term stored procedure in this article, it will be to signify any database-specific code, such as complex queries, triggers and so on. Implementation The pattern prescribes to write unit tests for the stored procedure, to test it in isolation from the rest of application a first simplification. These tests cover nontrivial logic in database code - probably you don't need them for indexes definition, but more for queries with aggregate functions. In the PHP world, Sqlite often suffices for testing queries - as long as you have an intermediate layer like Doctrine DBAL (part of Doctrine 2) which smooths out the differences between vendors. You use MySQL in production, Sqlite in the test suite, and you can write queries in Doctrine's DQL being confident that it will translate them to the right SQL dialect. These tests should be executed in a sandbox - a database with just enough structure and data to test the stored procedure at hand. This sandbox should run by definition on the production dbms. The most difficult aspect of the pattern is integrating with the dbms: it should be running and listening on the right port. A sandbox should be created in setUp() or setUpBeforeClass(), and destroyed during teardown. In case the database is not available, the tests should be marked as skipped or incomplete. Variations In In-Database Stored Procedure Tests, the test is written in the same language as the database code. I cannot imagine something more boring for a PHP programmer. In Remoted Stored Procedure Tests, which is the variation of interest for us, the tests are written in PHPUnit and integrated with the suite (slowing it down a bit). The logic is that whatever SQL logic you're going to add to your application, is already encapsulated in some PHP class: for example, complex queries are encapsulated in Repositories or DAO. So it's going to be feasible to build a sandbox via PHP code, and test the stored procedure as a black box. It will be encapsulated for a unique execution, like schema creation, or for executing multiple times in case of queries. 
Example The example shows you how to test a query with a real database - supposing a surrogate database does not support all the needed functions - from inside a test suite. I thought it would be difficult to write this test, but instead it required less than a Pomodoro. connection = new PDO("mysql:host=localhost;dbname=sandbox", 'root', ''); $this->connection->exec("CREATE TABLE users (name VARCHAR(255) NOT NULL PRIMARY KEY, year YEAR)"); $this->repository = new UserRepository($this->connection, 2011); } public function testAverageAgeIsCalculated() { $this->insertUser('Giorgio', 1942); $this->insertUser('Isaac', 1920); $this->assertEquals(80, $this->repository->getAverageAge()); } private function insertUser($name, $year) { $stmt = $this->connection->prepare("INSERT INTO users (name, year) VALUES (:name, :year)"); $stmt->bindValue('name', $name, PDO::PARAM_STR); $stmt->bindValue('year', $year, PDO::PARAM_INT); return $stmt->execute(); } public function tearDown() { $this->connection->exec('DROP TABLE users'); } } class UserRepository { private $connection; private $currentYear; public function __construct(PDO $connection, $currentYear) { $this->connection = $connection; $this->currentYear = $currentYear; } /** * We suppose AVG() cannot be correctly implemented by Sqlite or * another surrogate database (substitute another vendor feature * for the same effect). * We also suppose reconstituting millions of User objects to calculate * their average age isn't feasible: that's why we used SQL directly. */ public function getAverageAge() { $stmt = $this->connection->prepare('SELECT AVG(:year - year) AS average_age FROM users'); $stmt->bindValue('year', $this->currentYear, PDO::PARAM_INT); $stmt->execute(); $row = $stmt->fetch(); return $row['average_age']; } }
May 4, 2011
by Giorgio Sironi
· 2,564 Views
Gant 1.9.5 released
There was a regression in the 1.9.4 release of Gant that could not be ignored. The problem has been corrected, and a new release candidate was checked by the people who found the regression (thanks due to Jeff Brown and the Grails folk). There is therefore a shiny new Gant 1.9.5 release. Everything should be in place at Codehaus already, and Maven will get updated as soon as the Codehaus -> Maven sync happens. The regression, whilst essentially trivial, was sufficiently fundamental that I have taken the drastic step of removing the 1.9.4 release from everywhere. This should have no effect on people using Groovy 1.7 or 1.8; they should use Gant 1.9.5. People still using Groovy 1.6 will not see a Gant 1.9.5 and will have to stay with Gant 1.9.3. Of course, people still using Groovy 1.6 should upgrade to 1.8.0 immediately and therefore not have a problem with Gant :-)
May 3, 2011
by Russel Winder
· 215 Views
Why does Void class exist in JDK
I always try to bring something new and useful to this blog. This time we will look at Void.class (which in itself looks like something tricky), present in rt.jar. One can consider the java.lang.Void class as a wrapper for the keyword void. Some developers draw an analogy with the primitive data types int, long, short, byte, etc., which have the wrapper classes Integer, Long, Short and Byte respectively. But it should be kept in mind that, unlike those wrappers, the Void class doesn't store a value of type void in itself and hence is not a wrapper in the true sense.

Purpose: According to the javadoc, the Void class exists because sometimes we may need to represent the void keyword as an object. At the same time, we cannot create an instance of the Void class using the new operator, because the constructor of Void has been declared private. Moreover, the Void class is a final class, which means there is no way we can inherit from it. So the only purpose that remains for the existence of the Void class is reflection, where we can get the return type of a method as void. The following piece of code demonstrates this purpose:

public class Test {
    public static void main(String[] args) throws SecurityException, NoSuchMethodException {
        Class c1 = Test1.class.getMethod("Testt", null).getReturnType();
        System.out.println(c1 == Void.TYPE);
        System.out.println(c1 == Void.class);
    }
}

class Test1 {
    public void Testt() {}
}

One can also use the Void class in generics to specify that you don't care about the specific type of object being used. For example: List<Void> list1;

From http://extreme-java.blogspot.com/2011/04/void-class-java.html
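Beyond reflection and generics, a common place where java.lang.Void shows up in practice is Callable<Void>, when an API demands a return type but the task has nothing useful to return. This is a small sketch rather than anything from the article itself:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VoidCallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Callable must declare a return type; Void documents "nothing useful returned".
        Future<Void> future = pool.submit(new Callable<Void>() {
            public Void call() {
                System.out.println("side effects only");
                return null;   // the only legal value of type Void
            }
        });

        future.get();          // waits for completion, always yields null
        pool.shutdown();
    }
}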
May 3, 2011
by Sandeep Bhandari
· 23,436 Views · 2 Likes
Reasons for Slow Database Performance
There are often scenarios where an application does not perform as expected. A simple web page which fetches data from the database and displays it, optimized for mobile devices, should be fast, with turnaround times well under 30 seconds on a good network connection. Yet there are still cases where these kinds of applications suffer serious performance issues. This is usually because the database was not designed with proper attention to the application's requirements. You can change an application's design even after delivery, but changing the database design once a number of applications have been integrated with it is explosive. Here are some points which should be kept in mind while designing the application and the database. Only the basic idea is given for each; for details you can search each topic on the web, as each point easily expands into multiple articles.

1) Bind variables: When a SQL query is sent to the database engine for processing, it is compiled by the database compiler to get the tokens of the query. This involves parsing, optimizing and identifying the query. After a number of steps, the SQL query is passed to the database engine for execution. In a small application with a user base of fewer than 500, it is usually the same query that is executed more often than others. Using bind variables lets the database store the compiled query once and execute it with different data at different times. To use bind variables in Java, use PreparedStatement objects (a short JDBC example follows at the end of this article).

2) The query is not well formed: The same SQL query can usually be written in multiple ways, and there are ways a query can be optimized to give the best performance. The appropriate SQL construct should be chosen depending on the requirement. I have seen scenarios where people used a WHERE clause instead of GROUP BY and then complained about poor response times. Similarly, subqueries and joins complement each other.

3) The database structure is not well defined/normalized: It is probably known to everybody that database tables should be properly normalized, as this is part of every DBMS course at graduation level. If the tables are not properly designed and normalized, anomalies set in.

4) Proper caching is not in place: Many applications make use of temporary caches on the application server to store reference data or frequently accessed data, since with new-generation servers memory is less of an issue than time.

5) The number of rows in a table is too large: If a table itself has too much data, then queries against it will take time to execute. Partitioning the table into multiple tables is recommended in these situations. For example, if a table has records for 1,000,000 employees, it could be split into 5 smaller tables of 200,000 rows each. The advantage is that we know beforehand which smaller table to look in for a particular employee, since the split of the large table can be done on the employee id column.

6) Connections are not being pooled: If connections are not pooled, then a new connection is created for every request to the database. Maintaining a connection pool is much better than creating and destroying a connection for every SQL query. Of course, there are frameworks like Hibernate which take care of creating the connection pools and also allow these pools to be customized.

7) Connections are not closed/returned to the pool in case of exceptions: When an exception occurs while performing database operations, it ought to be caught. Usually catching the exception is not the issue, because SQLException is a checked exception, but closing the connection is something that is most often left out. If the connection is not released, it cannot be used for any other purpose until it times out.

8) Stored procedures for complex computations on the database: Stored procedures are a good way to perform database-intensive operations. They are already compiled, and there are fewer network round trips to get the same results compared with issuing individual SQL queries.

From http://extreme-java.blogspot.com/2011/04/reasons-for-slow-database-performance.html
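To make point 1 concrete, here is what using bind variables through JDBC's PreparedStatement looks like; the connection URL, credentials, table and column names are placeholders, not anything from the article:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BindVariableExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/appdb", "appuser", "secret");

        // The statement text is constant, so the database can reuse the parsed
        // and optimized plan; only the bound value changes per execution.
        String sql = "SELECT id, name FROM employee WHERE department = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "ENGINEERING");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
        conn.close();
    }
}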
April 30, 2011
by Sandeep Bhandari
· 59,455 Views · 1 Like
Fun With WebMatrix Helpers in ASP.NET MVC 3
it’s nearly the weekend, and it’s been long week, so let’s have a bit of fun… so i take it you’ve all had a chance to look at webmatrix? if you haven’t then you should really make the effort to have a play around with it. i will admit that i was a bit snobby about it when i first read about it, but it is actually a really great way to build small, simple web sites. for the uninitiated, here are a few useful links: the official site http://www.microsoft.com/web/webmatrix/ scott guthrie http://weblogs.asp.net/scottgu/archive/2010/07/06/introducing-webmatrix.aspx rob conery http://blog.wekeroad.com/microsoft/someone-hit-their-head scott hanselman http://www.hanselman.com/blog/hanseminutespodcast249onwebmatrixwithrobconery.aspx you can install webmatrix through the web platform installer . it has loads of really great features, which are much better explained in the links above than i could ever do, but the one that hit me immediately as being really fun was the webmatrix helpers feature and i wondered if there was a way to use them in mvc. well, webmatrix is built on asp.net, so it is actually a trivial task to add them to an asp.net mvc application. getting going there are loads of helpers available, today we are going to look at the microsoft web helpers. this package gives you the ability to quickly and easily add basic functionality for services such as twitter, bing, gravatar, facebook, google analytics and xbox live to your site. the awesomeness of nuget makes it easy to add the microsoft web helpers package to our project by typing: install-package microsoft-web-helpers into the the package manager console. next you will need to add references to webmatrix.data and webmatrix.webdata and set the “copy local” property to true for both of them. if you fail to do this step you will receive a compilation error, “the type or namespace name ‘simplemembershipprovider’ could not be found” in app_code\facebook.cshtml. and that’s it! you are now ready to start using the web helpers in your views. so let’s have a look at a few of them… gravatar you can add a gravatar image to your page by simply using the gravatar.gethtml() method in your razor view. for example: @gravatar.gethtml("stevelydford@gmail.com") displays this handsome chap (and no the glasses and bandito moustache are not real!): if the email address supplied to the method doesn’t have a corresponding gravatar account the default gravatar image will be returned, i.e. but you can also set the default image to be the url of any image you desire. for example, the following code: @gravatar.gethtml ("you@me.com", defaultimage: "http://blog.stevelydford.com/content/nograv.jpg") returns this stunning example of programmer art : optional parameters allow you to set the size of the image, the gravatar rating and a couple of other attributes . xbox live gamercard this is a very simple method which returns an xbox live gamercard. it has one parameter which is a string containing the required xbox live gametag. for example: @gamercard.gethtml("stinky53") returns this: which does nothing if not prove that i don’t get enough time to work on my gamerscore! microsoft bing web helpers has a bing class that enables you to easily let users google your site with bing . first, we need to add the following code to our razor view: @{ bing.sitetitle = ".net web stuff, mostly"; bing.siteurl = "http://blog.stevelydford.com"; } we can then use either the bing.searchbox() or bing.advancedsearchbox() methods to display a bing search box in our view. 
@bing.searchbox() displays a bing search box which takes the user to bing.com to displays it’s results: @bing.advancedsearchbox() displays a bing search box which renders a on your page containing the search results: analytics the analytics class of microsoft.web.helpers contains methods which generate scripts that enable a page to be tracked by google analytics, yahoo marketing solutions and/or statcounter. they all work in a very similar way and just require you to pass the method the relevant account details. for example: @analytics.getgooglehtml({your-analytics-webpropertyid-here}) @analytics.getyahoohtml({your-yahoo-accountid-here}) this will inject the necessary javascript into your view at runtime to enable tracking by the relevant service. twitter the microsoft.web.helpers namespace contains a twitter class with two methods – twitter.profile() and twitter.search(). twitter.profile() injects some javascript into your view which displays the feed for the twitter user specified in the username parameter: @twitter.profile("stevelydford") there are a whole raft of parameters, which allow you to customise the output in various ways, such as setting the width and height, colors, number of tweets returned, etc. a full list of these parameters can be found here . twitter.search() displays the twitter search results for the search string specified in the searchquery parameter: @twitter.search("london 2012") again, there are a bunch of optional parameters to allow you to customize the output to your requirements. when you use nuget to install the microsoft-web-helpers package a twittergoodies razor file is added to your app_code folder. this class contains helpers which provide additional twitter functionality. these helpers include twittergoodies.tweetbutton(), twittergoodies.followbutton(), twittergoodies.faves() and twittergoodies.list(), all of which can have their outputs customised using various optional parameters: @twittergoodies.tweetbutton (tweettext: "i'm reading steve lydford's blog", url:"http://blog.stevelydford.com") displays a tweet button which opens a new window to allow the user to send a tweet about your site. the url passed to the helper is automatically shortened using the twitter t.co url shortner: @twittergoodies.followbutton("stevelydford") displays a button which redirects them to twitter: @twittergoodies.list("stevelydford", "f1-4") displays a form which shows a public twitter list: there are a few other methods in the twittergoodies razor file, which you can view in app_code/twittergoodies.cshtml. facebook as well as the twittergoodies.cshtml page, the microsoft-web-helpers package also installs facebook.cshtml to your app_code directory. this file contains many useful facebook helpers. i will look at a couple here, a full list can be found on the facebookhelper codeplex site . one of the easiest to use out of the box is the facebook.likebutton() helper, which displays a ‘like’ button that either automatically ‘likes’ the url supplied, or opens a new facebook window ’on click’ if the user is not currently signed in: @facebook.likebutton("http://blog.stevelydford.com") next up is facebook.activityfeed() which displays stories when users ‘like’ content on a site or share content from a site on facebook. @facebook.acivityfeed("http://www.bbc.co.uk") most of the rest of the facebook helpers require initialization. in order to do this you will require a facebook application id. 
you can get one by browsing to http://www.facebook.com/developers/createapp.php and creating a new facebook application: when you are setting up your app ensure that you enter the url of your site, including the correct port number if you are working on localhost: you can then add the following code to your razor view to initialize: @{ facebook.initialize("{your-application-id-here}", "{your-application-secret-here}"); } then it’s just a matter of adding a couple of lines of code to the view to add facebook comments to the page: @facebook.getinitializationscripts() @facebook.comments() or, to show a facebook livestream to allow users of your site to communicate in real time: @facebook.livestream() conclusion this post shows just a small fraction of what can be achieved very quickly and very easily using webmatrix web helpers in an mvc application. the microsoft web helpers package makes it incredibly easy to add a whole load of functionality to you site for very little effort. have fun! let me know how you get on. go play….
April 29, 2011
by Steve Lydford
· 27,891 Views
Modules and namespaces in JavaScript
JavaScript does not come with support for modules. This blog post examines patterns and APIs that provide such support. It is split into the following parts: Patterns for structuring modules. APIs for loading modules asynchronously. Related reading, background and sources. 1. Patterns for structuring modules A module fulfills two purposes: First, it holds content, by mapping identifiers to values. Second, it provides a namespace for those identifiers, to prevent them from clashing with identifiers in other modules. In JavaScript, modules are implemented via objects. Namespacing: A top-level module is put into a global variable. That variable is the namespace of the module content. Holding content: Each property of the module holds a value. Nesting modules: One achieves nesting by putting a module inside another one. Filling a module with content Approach 1: Object literal. var namespace = { func: function() { ... }, value: 123 }; Approach 2: Assigning to properties. var namespace = {}; namespace.func = function() { ... }; namespace.value = 123; Accessing the content in either approach: namespace.func(); console.log(namespace.value + 44); Assessment: Object literal. Pro: Elegant syntax. Con: As a single, sometimes very long syntactic construct, it imposes constraints on its contents. One must maintain the opening brace before the content and the closing brace after the content. And one must remember to not add a comma after the last property value. This makes it harder to move content around. Assigning to properties. Con: Redundant repetitions of the namespace identifier. The Module pattern: private data and initialization In the module pattern, one uses an Immediately-Invoked Function Expression (IIFE, [1]) to attach an environment to the module data. The bindings inside that environment can be accessed from the module, but not from outside. Another advantage is that the IIFE gives you a place to perform initializations. var namespace = function() { // set up private data var arr = []; // not visible outside for(var i=0; i<4; i++) { arr.push(i); } return { // read-only access via getter get values() { return arr; } }; }(); console.log(namespace.values); // [0,1,2,3] Comments: Con: Harder to read and harder to figure out what is going on. Con: Harder to patch. Every now and then, you can reuse existing code by patching it just a little. Yes, this breaks encapsulation, but it can also be very useful for temporary solutions. The module pattern makes such patching impossible (which may be a feature, depending on your taste). Alternative for private data: use a naming convention for private properties, e.g. all properties whose names start with an underscore are private. Variation: Namespace is a function parameter. var namespace = {}; (function(ns) { // (set up private data here) ns.func = function() { ... }; ns.value = 123; }(namespace)); Variation: this as the namespace identifier (cannot accidentally be assigned to). var namespace = {}; (function() { // (set up private data here) this.func = function() { ... }; this.value = 123; }).call(namespace); // hand in implicit parameter "this" Referring to sibling properties Use this. Con: hidden if you nest functions (which includes methods in nested objects). var namespace = { _value: 123; // private via naming convention getValue: function() { return this._value; } anObject: { aMethod: function() { // "this" does not point to the module here } } } Global access. Cons: makes it harder to rename the namespace, verbose for nested namespaces. 
var namespace = {
    _value: 123, // private via naming convention
    getValue: function() { return namespace._value; }
};
Custom identifier: The module pattern (see above) enables one to use a custom local identifier to refer to the current module. Module pattern with object literal: assign the object to a local variable before returning it. Module pattern with parameter: the parameter is the custom identifier. Private data and initialization for properties An IIFE can be used to attach private data and initialization code to an object. It can do the same for a single property.
var ns = {
    getValue: function() {
        var arr = []; // not visible outside
        for(var i=0; i<4; i++) {
            arr.push(i);
        }
        return function() { // actual property value
            return arr;
        };
    }()
};
Read on for an application of this pattern. Types in object literals Problem: A JavaScript type is defined in two steps. First, define the constructor. Second, set up the prototype of the constructor. These two steps cannot be performed in object literals. There are two solutions: Use an inheritance API where constructor and prototype can be defined simultaneously [4]. Wrap the two parts of the type in an IIFE:
var ns = {
    Type: function() {
        var constructor = function() {
            // ...
        };
        constructor.prototype = {
            // ...
        };
        return constructor; // value of Type
    }()
};
Managing namespaces Use the same namespace in several files: You can spread out a module definition across several files. Each file contributes features to the module. If you create the namespace variable as follows then the order in which the files are loaded does not matter. Note that this pattern does not work with object literals.
var namespace = namespace || {};
Nested namespaces: With multiple modules, one can avoid a proliferation of global names by creating a single global namespace and adding sub-modules to it. Further nesting is not advisable, because it adds complexity and is slower. You can use longer names if name clashes are an issue.
var topns = topns || {};
topns.module1 = {
    // content
};
topns.module2 = {
    // content
};
YUI2 uses the following pattern to create nested namespaces.
YAHOO.namespace("foo.bar");
YAHOO.foo.bar.doSomething = function() { ... };
2. APIs for loading modules asynchronously Avoiding blocking: The content of a web page is processed sequentially. When a script tag is encountered that refers to a file, two steps happen: The file is downloaded. The file is interpreted. All browsers block the processing of subsequent content until (2) is finished, because everything is single-threaded and must be processed in order. Newer browsers perform some downloads in parallel, but rendering is still blocked [2]. This unnecessarily delays the initial display of a page. Modern module APIs provide a way around this by supporting asynchronous loading of modules. There are usually two parts to using such APIs: First one specifies what modules one would like to use. Second, one provides a callback that is invoked once all modules are ready. The goal of this section is not to be a comprehensive introduction, but rather to give you an overview of what is possible in the design space of JavaScript modules. 2.1. RequireJS RequireJS has been created as a standard for modules that work both on servers and in browsers. The RequireJS website explains the relationship between RequireJS and the earlier CommonJS standard for server-side modules [3]: CommonJS defines a module format. Unfortunately, it was defined without giving browsers equal footing to other JavaScript environments.
Because of that, there are CommonJS spec proposals for Transport formats and an asynchronous require. RequireJS tries to keep with the spirit of CommonJS, with using string names to refer to dependencies, and to avoid modules defining global objects, but still allow coding a module format that works well natively in the browser. RequireJS implements the Asynchronous Module Definition (formerly Transport/C) proposal. If you have modules that are in the traditional CommonJS module format, then you can easily convert them to work with RequireJS. RequireJS projects have the following file structure: project-directory/ project.html legacy.js scripts/ main.js require.js helper/ util.js project.html: My Sample Project main.js: helper/util is resolved relative to data-main. legacy.js ends with .js and is assumed to not be in module format. The consequences are that its path is resolved relative to project.html and that there isn’t a function parameter to access its (module) contents. require(["helper/util", "legacy.js"], function(util) { //This function is called when scripts/helper/util.js is loaded. require.ready(function() { //This function is called when the page is loaded //(the DOMContentLoaded event) and when all required //scripts are loaded. }); }); Other features of RequireJS: Specify and use internationalization data. Load text files (e.g. to be used for HTML templating) Use JSONP service results for initial application setup. 2.2. YUI3 Version 3 of the YUI JavaScript framework brings its own module infrastructure. YUI3 modules are loaded asynchronously. The general pattern for using them is as follows. YUI().use('dd', 'anim', function(Y) { // Y.DD is available // Y.Anim is available }); Steps: Provide IDs “dd” and “anim” of the modules you want to load. Provide a callback to be invoked once all modules have been loaded. The parameter Y of the callback is the YUI namespace. This namespace contains the sub-namespaces DD and Anim for the modules. As you can see, the ID of a module and its namespace are usually different. Method YUI.add() allows you to register your own modules. YUI.add('mymodules-mod1', function(Y) { Y.namespace('mynamespace'); Y.mynamespace.Mod1 = function() { // expose an API }; }, '0.1.1' // module version ); YUI includes a loader for retrieving modules from external files. It is configured via a parameter to the API. The following example loads two modules: The built-in YUI module dd and the external module yui_flot that is available online. YUI({ modules: { yui2_yde_datasource: { // not used below fullpath: 'http://yui.yahooapis.com/datasource-min.js' }, yui_flot: { fullpath: 'http://bluesmoon.github.com/yui-flot/yui.flot.js' } } }).use('dd', 'yui_flot', function(Y) { // do stuff }); 2.3. Script loaders Similarly to RequireJS, script loaders are replacements for script tags that allow one to load JavaScript code asynchronously and in parallel. But they are usually simpler than RequireJS. Examples: LABjs: a relatively simple script loader. Use it instead of RequireJS if you need to load scripts in a precise order and you don't need to manage module dependencies. Background: “LABjs & RequireJS: Loading JavaScript Resources the Fun Way” describes the differences between LABjs and RequireJS. yepnope: A fast script loader that allows you to make the loading of some scripts contingent on the capabilities of the web browser. 3. 
Related reading, background and sources Related reading: A first look at the upcoming JavaScript modules Background: JavaScript variable scoping and its pitfalls Loading Scripts Without Blocking CommonJS Modules Lightweight JavaScript inheritance APIs Main sources of this post: Namespacing in JavaScript YUI2: A JavaScript Module Pattern YUI3: YUI Global Object How to get started with RequireJS From http://www.2ality.com/2011/04/modules-and-namespaces-in-javascript.html
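As an illustration of the Asynchronous Module Definition style that RequireJS implements (section 2.1), a minimal sketch might look like the following; the module path helper/util matches the file layout shown earlier, but the exported trim() helper is invented for the example.
// scripts/helper/util.js: an AMD module; define() is supplied by RequireJS
define(function () {
    // whatever is returned here becomes the module's public API
    return {
        trim: function (s) {
            return s.replace(/^\s+|\s+$/g, "");
        }
    };
});

// scripts/main.js: consume the module; the callback runs once helper/util is loaded
require(["helper/util"], function (util) {
    console.log(util.trim("  hello  ")); // "hello"
});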
April 29, 2011
by Axel Rauschmayer
· 18,067 Views
Bootstrapping CDI in several environments
I feel like writing some posts about CDI (Contexts and Dependency Injection), so this is the first one of a series of posts. (The compile-time dependency you need is javax.enterprise:cdi-api, version 1.0, with provided scope.) An empty beans.xml will do to enable CDI: you must have a beans.xml file in your project (under META-INF or WEB-INF). That's because CDI needs to identify the beans in your classpath (this is called bean discovery) and build its internal metamodel. With the beans.xml file, CDI knows it has beans to discover. So, for all the following examples, I'll make it simple and will leave this file completely empty.
Java EE 6 containers. Let's start with the easiest possible environment: Java EE 6 containers. Why is it the simplest? Well, because you don't have to do anything: CDI is part of Java EE 6 as well as the Web Profile 1.0, so you don't need to manually bootstrap it. Let's see how to inject a CDI bean within an EJB 3.1 and a Servlet 3.0.
EJB 3.1. Since EJB 3.1 you can use the EJBContainer API to get an in-memory embedded EJB container, and you can easily unit test your EJBs. So let's write an EJB and a test class. First, let's have a look at the code of the EJB. As you can see, with version 3.1 an EJB is just a POJO: no inheritance, no interface, just one @Stateless annotation. It gets a reference to the Hello bean by using the @Inject annotation and uses it in the saySomething() method.
@Stateless
public class MainEJB31 {
    @Inject
    Hello hello;

    public String saySomething() {
        return hello.sayHelloWorld();
    }
}
You can now package the MainEJB31, Hello and World classes with the empty beans.xml file into a jar, deploy it to GlassFish 3.x, and it will work. But if you don't want to bother deploying it to GlassFish and just want to unit test it, this is what you need to do:
public class MainEJBTest {
    private static EJBContainer ec;
    private static Context ctx;

    @BeforeClass
    public static void initContainer() throws Exception {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(EJBContainer.MODULES, new File("target/classes"));
        ec = EJBContainer.createEJBContainer(properties);
        ctx = ec.getContext();
    }

    @AfterClass
    public static void closeContainer() throws Exception {
        if (ec != null) ec.close();
    }

    @Test
    public void shouldDisplayHelloWorld() throws Exception {
        // looks up the EJB using the portable JNDI name
        MainEJB31 mainEJB = (MainEJB31) ctx.lookup("java:global/classes/MainEJB31!org.antoniogoncalves.cdi.helloworld.MainEJB31");
        assertEquals("Should say hello world !!!", "Hello World !!!", mainEJB.saySomething());
    }
}
In the code above the method initContainer() initializes the EJBContainer. The shouldDisplayHelloWorld() method looks up the EJB (using the new portable JNDI name), invokes it and makes sure the saySomething() method returns Hello World !!!. Green test. That was pretty easy too.
Servlet 3.0. Servlet 3.0 is part of Java EE 6, so again, there is no configuration needed to bootstrap CDI. Let's use the new @WebServlet annotation and write a very simple servlet that injects a reference to Hello and displays an HTML page with Hello World !!!.
This is what the servlet looks like:
@WebServlet(urlPatterns = "/mainservlet")
public class MainServlet30 extends HttpServlet {
    @Inject
    Hello hello;

    @Override
    protected void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println(""); // (the HTML markup inside these strings was stripped from this excerpt)
        out.println("");
        out.println("");
        out.println(saySomething());
        out.println("");
        out.println("");
        out.close();
    }

    public String saySomething() {
        return hello.sayHelloWorld();
    }
}
Thanks to the @WebServlet annotation I don't need any web.xml (it's optional in Servlet 3.0) to map the MainServlet30 to the /mainservlet URL. You can now package the MainServlet30, Hello and World classes with the empty beans.xml and no web.xml into a war, deploy it to GlassFish 3.x, go to http://localhost:8080/bootstrapping-servlet30-1.0/mainservlet and it will work. Unfortunately Servlet 3.0 doesn't have an API for the container (such as EJBContainer). There is no ServletContainer API that would let you use an embedded servlet container in a standard way and, why not, easily unit test it.
Application client container. Not many people know it, but Java EE (or even older J2EE versions) comes with an Application Client Container (ACC). It's like an EJB or servlet container, but for plain POJOs. For example you can develop a Swing application (yes, I'm sure that some of you still use Swing), run it in the ACC and get some extra services from the container (security, naming, certain annotations…). GlassFish v3 has an ACC that you can launch on a command line: appclient -jar. So I thought, great, I can use CDI with the ACC the same way I use it within the EJB or servlet container, no need to bootstrap anything, it's all out of the box. I was wrong. As per the CDI specification (section 12.1), CDI is not required to support application client bean archives, so the GlassFish Application Client Container doesn't support it. I haven't tried the JBoss ACC; maybe it works.
Other containers. The beauty of CDI is that it doesn't require Java EE 6. You can use CDI with simple POJOs in a Java SE environment, as well as in some Servlet 2.5 containers. Of course it's not as easy to bootstrap because you need a bit of configuration, but then it works fine (though not always, as you'll see).
Java SE 6. OK, so until now there was nothing to do to bootstrap CDI: it is already bundled with the EJB 3.1 and Servlet 3.0 containers of Java EE 6 (and the Web Profile). So the idea here is to use CDI in a simple Java SE environment. Coming back to our Hello and World classes, we need a POJO with an entry point that will bootstrap CDI so we can use injection to get those classes. In standard Java SE, when we say entry point, we think of a public static void main(String[] args) method. Well, we need something similar… but different. Weld is the reference implementation of CDI. That means it implements the specification, the standard APIs (mostly found in the javax.inject and javax.enterprise.context packages) but also some proprietary code (in the org.jboss.weld package). Bootstrapping CDI in Java SE is not specified, so you will need to use specific Weld features. You can do that in two different flavors: by observing the ContainerInitialized event or by using the programmatic bootstrap API consisting of the Weld and WeldContainer classes. The following code uses the ContainerInitialized event. As you can see, it uses the @Observes annotation that I'll explain in a future post.
But the idea is that this class is listening to the event and runs its code once the event is triggered.
import org.jboss.weld.environment.se.events.ContainerInitialized;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

public class MainJavaSE6 {
    @Inject
    Hello hello;

    public void saySomething(@Observes ContainerInitialized event) {
        System.out.println(hello.sayHelloWorld());
    }
}
But who triggers the ContainerInitialized event? Well, it's the org.jboss.weld.environment.se.StartMain class. I'm using Maven, so a nice trick is to use the exec-maven-plugin to run the StartMain class. Download the code, have a look at the pom.xml and give it a try. The other possibility is to programmatically bootstrap the Weld container. This can be handy in unit testing. The code below initializes the Weld container (with new Weld().initialize()) and then looks up the Hello class (using weld.instance().select(Hello.class).get()).
import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;
import org.junit.BeforeClass;
import org.junit.Test;
import static junit.framework.Assert.assertEquals;

public class HelloTest {
    @Test
    public void shouldDisplayHelloWorld() {
        WeldContainer weld = new Weld().initialize();
        Hello hello = weld.instance().select(Hello.class).get();
        assertEquals("Should say hello world !!!", "Hello World !!!", hello.sayHelloWorld());
    }
}
Execute the test with mvn test and it should be green. As you can see, there is a bit more work to using CDI in a Java SE environment, but it's not that complicated.
Tomcat 6.x. OK, and what about your legacy Servlet 2.5 containers? The first one that comes to mind is Tomcat 6.x (note that Tomcat 7.x will implement Servlet 3.0 but is still in beta at the time of writing this post). Weld provides support for Tomcat, but you need to configure it a bit to make CDI work. First of all, this is Servlet 2.5, not 3.0, so the code of the servlet is slightly different from the one seen before (no annotation allowed) and, of course, you need your good old web.xml file:
public class MainServlet25 extends HttpServlet {
    @Inject
    Hello hello;

    @Override
    protected void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println(""); // (the HTML markup inside these strings was stripped from this excerpt)
        out.println("");
        out.println("");
        out.println(saySomething());
        out.println("");
        out.println("");
        out.close();
    }

    public String saySomething() {
        return hello.sayHelloWorld();
    }
}
Because we don't have the @WebServlet annotation in Servlet 2.5, we need to declare and map the servlet in web.xml (using the servlet and servlet-mapping tags). Then, you need to explicitly specify the servlet listener that boots Weld and controls its interaction with requests (org.jboss.weld.environment.servlet.Listener). Tomcat has a read-only JNDI, so Weld can't automatically bind the BeanManager extension SPI. To bind the BeanManager into JNDI, you should populate META-INF/context.xml and make the BeanManager available to your deployment by adding it to your web.xml. (The XML itself was lost from this excerpt; the surviving values are MainServlet25, org.antoniogoncalves.cdi.bootstrapping.servlet.MainServlet25, /mainservlet, org.jboss.weld.environment.servlet.Listener, BeanManager and javax.enterprise.inject.spi.BeanManager; a reconstructed sketch follows below.) The META-INF/context.xml file is an optional file which contains the context for a single Tomcat web application. It can be used to define certain behaviours for your application, JNDI resources and other settings.
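The web.xml and META-INF/context.xml listings did not survive this excerpt. Based on the values that remain above, a reconstruction would look roughly like the following sketch; treat the exact layout as an assumption rather than the author's original files.
<!-- web.xml (sketch): declare the servlet, map it, boot Weld, expose the BeanManager -->
<servlet>
    <servlet-name>MainServlet25</servlet-name>
    <servlet-class>org.antoniogoncalves.cdi.bootstrapping.servlet.MainServlet25</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>MainServlet25</servlet-name>
    <url-pattern>/mainservlet</url-pattern>
</servlet-mapping>
<listener>
    <listener-class>org.jboss.weld.environment.servlet.Listener</listener-class>
</listener>
<resource-env-ref>
    <resource-env-ref-name>BeanManager</resource-env-ref-name>
    <resource-env-ref-type>javax.enterprise.inject.spi.BeanManager</resource-env-ref-type>
</resource-env-ref>

<!-- META-INF/context.xml (sketch): bind the BeanManager into Tomcat's read-only JNDI -->
<Context>
    <Resource name="BeanManager"
              auth="Container"
              type="javax.enterprise.inject.spi.BeanManager"
              factory="org.jboss.weld.resources.ManagerObjectFactory"/>
</Context>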
Package all the files (MainServlet25, Hello, World, META-INF/context.xml, beans.xml and web.xml) into a war and deploy it to Tomcat 6.x. Go to http://localhost:8080/bootstrapping-servlet25-tomcat-1.0/mainservlet and you will see your Hello World page.
Jetty 6.x. Another famous Servlet 2.5 container is Jetty 6.x (at Codehaus) and Jetty 7.x (note that Jetty 8.x will implement Servlet 3.0 but is still at an experimental stage at the time of writing this post). If you look at the Weld documentation, there is actually support for Jetty 6.x and 7.x. The code is the same as for Tomcat (because it's a Servlet 2.5 container), but the configuration changes. With Jetty you need to add two files under WEB-INF: jetty-env.xml and jetty-web.xml. (Their XML was lost from this excerpt; the surviving values are BeanManager, javax.enterprise.inject.spi.BeanManager, org.jboss.weld.resources.ManagerObjectFactory and true.) Package all the files (MainServlet25, Hello, World, WEB-INF/jetty-env.xml, WEB-INF/jetty-web.xml, beans.xml and web.xml) into a war and deploy it to Jetty 6.x. Go to http://localhost:8080/bootstrapping-servlet25-jetty6/mainservlet and you will see your Hello World page. There was a mistake in the Weld documentation so I couldn't make it work at first. I started a thread on the Weld forum and, thanks to Dan Allen, Pete Muir and all the Weld team, this was fixed and I managed to make it work. As simple as posting an email to the forum. Thanks for your help guys.
Spring 3.x. Here is the tricky part. Spring 3.x implements JSR 330: Dependency Injection for Java, which means that @Inject works out of the box. But I didn't find a way to integrate CDI with Spring 3.x. The Weld documentation mentions that, because of its extension points, "integration with third-party frameworks such as Spring (…) was envisaged by the designers of CDI". I did find this blog that simulates CDI features by enabling Spring ones. What I didn't find is a clear statement or roadmap from SpringSource about supporting CDI or not in future releases. The last trace of this topic is a comment on a long TSS flaming thread. At that time (16 December 2009), Juergen Hoeller said "with respect to implementing CDI on top of Spring (…) trying to hammer it into the semantic frame of another framework such as CDI would be an exercise that is certainly achievable (…) but ultimately pointless". But if you have any fresh news about it, let me know.
Conclusion. As I said, this post is not about explaining CDI; I'll do that in future posts. I just wanted to focus on how to bootstrap it in several environments so you can try it by yourself. As you saw, it's much simpler to use CDI within an EJB 3.1 or Servlet 3.0 container in Java EE 6. I've used GlassFish 3.x but it should also work with other Java EE 6 or Web Profile containers such as JBoss 6 or Resin. When you don't use Java EE 6, there is a bit more work to do: depending on your environment or servlet container you need some configuration to bootstrap Weld. By the way, I've used Weld because it's the reference implementation, the one bundled with GlassFish and JBoss, but you could also use OpenWebBeans, another CDI implementation. Download the code, give it a try, and give me some feedback. From http://agoncal.wordpress.com/2011/01/12/bootstrapping-cdi-in-several-environments/
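The Hello and World beans used throughout the article are not part of this excerpt. A minimal sketch that is consistent with the calls shown above (hello.sayHelloWorld() returning a "Hello World !!!" string) could be as simple as this; the exact wording of the returned strings is an assumption.
public class World {
    public String getWorld() {
        return "World !!!"; // assumed wording
    }
}

public class Hello {
    @javax.inject.Inject
    private World world; // plain CDI injection, no qualifier needed

    public String sayHelloWorld() {
        return "Hello " + world.getWorld();
    }
}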
April 28, 2011
by Antonio Goncalves
· 31,039 Views
Just say no to swapping!
Imagine you love to cook; it's an intense hobby of yours. Over time, you've accumulated many fun spices, but your pantry is too small, so, you rent an off-site storage facility, and move the less frequently used spice racks there. Problem solved! Suddenly you decide to cook this great new recipe. You head to the pantry to retrieve your Saffron, but it's not there! It was moved out to the storage facility and must now be retrieved (this is a hard page fault). No problem -- your neighbor volunteers to go fetch it for you. Unfortunately, the facility is ~2,900 miles away, all the way across the US, so it takes your friend 6 days to retrieve it! This assumes you normally take 7 seconds to retrieve a spice from the pantry; that your data was in main memory (~100 nanoseconds access time), not in the CPU's caches (which'd be maybe 10 nanoseconds); that your swap file is on a fast (say, WD Raptor) spinning-magnets hard drive with 5 millisecond average access time; and that your neighbor drives non-stop at 60 mph to the facility and back. Even worse, your neighbor drives a motorcycle, and so he can only retrieve one spice rack at a time. So, after waiting 6 days for the Saffron to come back, when you next go to the pantry to get some Paprika, it's also "swapped out" and you must wait another 6 days! It's possible that first spice rack also happened to have the Paprika but it's also likely it did not; that depends on your spice locality. Also, with each trip, your neighbor must pick a spice rack to move out to the facility, so that the returned spice rack has a place to go (it is a "swap", after all), so the Paprika could have just been swapped out! Sadly, it might easily be many weeks until you succeed in cooking your dish. Maybe in the olden days, when memory itself was a core of little magnets, swapping cost wasn't so extreme, but these days, as memory access time has improved drastically while hard drive access time hasn't budged, the disparity is now unacceptable. Swapping has become a badly leaking abstraction. When a typical process (say, your e-mail reader) has to "swap back in" after not being used for a while, it can hit 100s of such page faults, before finishing redrawing its window. It's an awful experience, though it has the fun side effect of letting you see, in slow motion, just what precise steps your email reader goes through when redrawing its window. Swapping is especially disastrous with JVM processes. See, the JVM generally won't do a full GC cycle until it has run out of its allowed heap, so most of your heap is likely occupied by not-yet-collected garbage. Since these pages aren't being touched (because they are garbage and thus unreferenced), the OS happily swaps them out. When GC finally runs, you have a ridiculous swap storm, pulling in all these pages only to then discover that they are in fact filled with garbage and should be discarded; this can easily make your GC cycle take many minutes! It'd be better if the JVM could work more closely with the OS so that GC would somehow run on-demand whenever the OS wants to start swapping so that, at least, we never swap out garbage. Until then, make sure you don't set your JVM's heap size too large! Just use an SSD... These days, many machines ship with solid state disks, which are an astounding (though still costly) improvement over spinning magnets; once you've used an SSD you can never go back; it's just one of life's many one-way doors. 
You might be tempted to declare that this problem is solved, since SSDs are so blazingly fast, right? Indeed, they are orders of magnitude faster than spinning magnets, but they are still 2-3 orders of magnitude slower than main memory or CPU cache. The typical SSD might have 50 microseconds access time, which equates to ~58 total miles of driving at 60 mph. Certainly a huge improvement, but still unacceptable if you want to cook your dish on time!
Just add RAM... Another common workaround is to put lots of RAM in your machine, but this can easily backfire: operating systems will happily swap out memory pages in favor of caching IO pages, so if you have any processes accessing lots of bytes (say, mencoder encoding a 50 GB Blu-ray movie, maybe a virus checker or backup program, or even Lucene searching against a large index or doing a large merge), the OS will swap your pages out. This then means that the more RAM you have, the more swapping you get, and the problem only gets worse! Fortunately, some OSes let you control this behavior: on Linux, you can tune swappiness down to 0 (most Linux distros default this to a highish number); Windows also has a checkbox, under My Computer -> Properties -> Advanced -> Performance Settings -> Advanced -> Memory Usage, that lets you favor Programs or System Cache, which is likely doing something similar. There are low-level IO flags that these programs are supposed to use so that the OS knows not to cache the pages they access, but sometimes the processes fail to use them or cannot use them (for example, they are not yet exposed to Java), and even if they do, sometimes the OS ignores them!
When swapping is OK. If your computer never runs any interactive processes, i.e., a process where a human is blocked (waiting) on the other end for something to happen, and only runs batch processes which tend to be active at different times, then swapping can be an overall win, since it allows the process which is active to make nearly-full use of the available RAM. Net/net, over time, this will give greater overall throughput for the batch processes on the machine. But remember that the server running your web site is an interactive process; if your server processes (web/app server, database, search server, etc.) are stuck swapping, your site has for all intents and purposes become unusable to your users.
This is a fixable problem. Most processes have known data structures that consume substantial RAM, and in many cases these processes could easily discard and later regenerate their data structures in much less time than even a single page fault. Caches can simply be pruned or discarded since they will self-regenerate over time. These data structures should never be swapped out, since regeneration is far cheaper. Somehow the OS should ask each RAM-intensive and least-recently-accessed process to discard its data structures to free up RAM, instead of swapping out the pages occupied by the data structure. Of course, this would require a tighter interaction between the OS and processes than exists today; Java's SoftReference is close, except it only works within a single JVM and does not interact with the OS.
What can you do? Until this problem is solved for real, the simplest workaround is to disable swapping entirely and stuff as much RAM as you can into the machine. RAM is cheap, memory modules are dense, and modern motherboards accept many modules. This is what I do. Of course, with this approach, when you run out of RAM, stuff will start failing.
If the software is well written, it'll fail gracefully: your browser will tell you it cannot open a new window or visit a new page. If it's poorly written it will simply crash, thus quickly freeing up RAM and hopefully not losing any data or corrupting any files in the process. Linux takes the simple draconian approach of picking a memory hogging process and SIGKILL'ing it. If you don't want to disable swapping you should at least tell the OS not to swap pages out for IO caching. Just say no to swapping!
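The post points to Java's SoftReference as a partial, in-process version of the "discard and regenerate" idea. A minimal sketch of a cache whose contents the JVM (rather than the OS) may reclaim under memory pressure might look like this; the Callable-based regeneration hook is an illustration, not a prescription.
import java.lang.ref.SoftReference;
import java.util.concurrent.Callable;

// Holds a regenerable data structure behind a SoftReference so the GC can
// discard it under memory pressure, instead of the OS swapping its pages out.
public class RegenerableCache<T> {
    private final Callable<T> regenerator;
    private SoftReference<T> ref = new SoftReference<T>(null);

    public RegenerableCache(Callable<T> regenerator) {
        this.regenerator = regenerator;
    }

    public synchronized T get() throws Exception {
        T value = ref.get();
        if (value == null) {                  // cleared by GC, or never built yet
            value = regenerator.call();       // regeneration assumed cheaper than paging back in
            ref = new SoftReference<T>(value);
        }
        return value;
    }
}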
April 27, 2011
by Michael Mccandless
· 25,752 Views
Check if PHP is running in safe mode or not
Lately, I have been playing around with PHP configuration, and I realized that runtime configuration changes made with the ini_set() function have no effect if you are running PHP in safe mode. A simple check in PHP can tell you whether safe mode is on; see the code block below, which lets you check this. The next time you are going to change any configuration at runtime, you will want to run this check first, before expecting the change to work. Hope that helps. Stay digified !!
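The code block the post refers to did not survive this excerpt. A check along these lines, using ini_get() to read the directive at runtime, is presumably what was shown; the echoed messages are invented for the example.
<?php
// safe_mode is an ini directive; ini_get() returns a falsy value when it is off
if (ini_get('safe_mode')) {
    echo 'PHP is running in safe mode; ini_set() changes will be ignored.';
} else {
    echo 'Safe mode is off; runtime configuration changes should take effect.';
    ini_set('display_errors', '1'); // example of a runtime change
}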
April 26, 2011
by Sachin Khosla
· 14,799 Views
Turning Off Session Fixation Protection in Tomcat 7
Session fixation protection is a new feature that was introduced as part of the Apache Tomcat 7 release process, and has been backported and turned on by default in all versions from 6.0.21 on. Session fixation happens when an attacker sends a link to the victim with a session ID included. The minute the victim authenticates the session in the link by logging in, the attacker is able to just click the session link and have full access to the intended victim's account. According to Mark Thomas, release manager for Tomcat 7 and member of the Apache Security Committee, in an article on TomcatExpert.com: Essentially, when a user authenticates their session, Tomcat will change the session ID. It does not destroy the previous session, rather it renames it so it is no longer found by that ID. So in our example above, Bob would try and log on with that session, and he would not be able to find it. Turning Off Session Fixation Protection Since it is turned on by default, the configuration is implicit inside of Tomcat. To turn it off, you will need to explicitly add the configuration and turn it off. First you will need to find the details of the specific type of authentication valve you are using in the Apache Tomcat Valve Configuration Documentation. Then you would need to navigate to the context.xml file for your application (not $CATALINA_BASE/conf/context.xml - that is the global context.xml that provides defaults for all web applications). Add the valve with the appropriate class for your authentication type (e.g., for basic authentication add a valve using class="org.apache.catalina.authenticator.BasicAuthenticator"). Finally, set the parameter changeSessionIdOnAuthentication="false". A sketch of the resulting configuration is shown below. When to Turn Off Session Fixation Protection Generally, this is not recommended. The Apache Security Team deemed this feature a basic enough protection that it should be afforded to all users implicitly. However, in some cases application-specific behavior could break, and the feature would need to be turned off until the functionality could be restored with the protection in place. For more information on the feature and when to use it, see the full article from Mark Thomas on TomcatExpert.com.
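Putting the steps above together, the application's context.xml for basic authentication would look roughly like this sketch, built only from the class and attribute names given in the article:
<!-- META-INF/context.xml of the web application (not $CATALINA_BASE/conf/context.xml) -->
<Context>
    <Valve className="org.apache.catalina.authenticator.BasicAuthenticator"
           changeSessionIdOnAuthentication="false"/>
</Context>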
April 26, 2011
by Stacey Schneider
· 14,829 Views
Exploring TDD in JavaScript with a small kata
A code kata is an exercise where you focus on your technique instead of on the final product of your mind and fingers. But a kata can also be used as a constant parameter while other variables change, like in scientific experiments. For example, when learning a new programming language or framework, you can execute an old kata in order to explore it. I decided to perform a small and famous kata that we have also used during interviews to separate programmers from non-programmers: the FizzBuzz kata. My goal was to learn how to set up a platform for Test-Driven Development in JavaScript, following the advice of the Test-Driven JavaScript Development book. The parameters that change from my habits are the tools for running tests and the programming language, but my IDE (Unix&Vim) remained fixed along with the kata: Write a function that returns its numerical argument. But for multiples of three return Fizz instead of the number and for the multiples of five return Buzz. For numbers which are multiples of both three and five return FizzBuzz. Additional requirement: when passed a multiple of 7, return Bang; when passed a multiple of 5 and 7, return BuzzBang; and so on for all the combinations. As my tools for running the tests, I used JsTestDriver and Firefox, as suggested by the book Test-Driven JavaScript Development which I'm currently reading.
JsTestDriver JsTestDriver will make you feel the joy of a green bar again. Download its jar, put it somewhere and add an alias in your .bashrc:
export JSTESTDRIVER_HOME=~/bin
alias jstestdriver="java -jar $JSTESTDRIVER_HOME/JsTestDriver-1.3.2.jar"
Start the server:
jstestdriver --port 4224
Point an open browser (I used Firefox) to localhost:4224. The browser will ping it via Ajax requests indefinitely to gather tests to run. Now we can use the command line to run tests, as you would do with PHPUnit if you are a PHPer:
jstestdriver --tests all
The Kata I started with a simple function, fizzbuzz(), and a single test case. I had never written a test with JsTestDriver before, so I needed to gain some confidence and be sure the configuration file was correct.
server: http://localhost:4224
load:
  - src/*.js
  - test/*.js
In JsTestDriver, a test case is created by passing to TestCase (a global function provided by JsTestDriver) a map containing anonymous functions.
TestCase("FizzBuzzTest", {
    "test should return Fizz when passed 3" : function () {
        assertEquals("Fizz", fizzbuzz(3));
    }
});
The functions whose names start with test will be executed; there are some reserved keywords like setUp which are used as hooks for fixture creation. Running the test with the alias command is really simple:
jstestdriver --tests all
I made the first test pass with fizzbuzz.js, a file containing a first version of the function (with a fake implementation):
function fizzbuzz() {
    return 'Fizz';
}
The result? A green bar (metaphorically green; all tests pass).
Total 1 tests (Passed: 1; Fails: 0; Errors: 0) (0,00 ms)
Firefox 4.0 Linux: Run 1 tests (Passed: 1; Fails: 0; Errors 0) (0,00 ms)
You can capture more than one browser if you want to run tests simultaneously in all of them, but it will probably slow down the basic TDD cycle. You can leave cross-browser testing for later.
Going on After this first test, I went on adding new ones and making them pass, until I even converted the function to an object, for the sake of easy configuration (a function returning a function would be the same).
Since I also needed to create the object in just one place, I started using setUp for the fixture creation: TestCase("FizzBuzzTest", { setUp : function () { this.fizzbuzz = new FizzBuzz({ 3 : 'Fizz', 5 : 'Buzz', 7 : 'Bang' }); }, "test should return the number when passed 1 or 2" : function () { assertEquals(1, this.fizzbuzz.accept(1)); assertEquals(2, this.fizzbuzz.accept(2)); }, "test should return Fizz when passed 3 or a multiple" : function () { assertEquals("Fizz", this.fizzbuzz.accept(3)); assertEquals("Fizz", this.fizzbuzz.accept(6)); }, "test should return Buzz when passed 5 or a multiple" : function () { assertEquals("Buzz", this.fizzbuzz.accept(5)); assertEquals("Buzz", this.fizzbuzz.accept(10)); }, "test should return FizzBuzz when passed a multiple of both 3 and 5" : function () { assertEquals("FizzBuzz", this.fizzbuzz.accept(15)); assertEquals("FizzBuzz", this.fizzbuzz.accept(30)); }, "test should return Bang when passed a multiple of 7" : function () { assertEquals("Bang", this.fizzbuzz.accept(7)); assertEquals("Bang", this.fizzbuzz.accept(14)); }, "test should return FizzBuzzBang when it is the case" : function () { assertEquals("FizzBuzzBang", this.fizzbuzz.accept(3*5*7)); } }); You can use this to share fixtures between the setUp and the different test methods: the test does not look different from JUnit and PHPUnit ones. Like in all xUnit testing frameworks, the setUp is executed on a brand new object for each test, to preserve isolation. I like a bit the way in which in JavaScript you can tear and put together objects: after all, it's called object-oriented programming, not class-oriented programming. I decided to use a small function constructor as you may infer from the test: function FizzBuzz(correspondences) { this.correspondences = correspondences; this.accept = function (number) { var result = ''; for (var divisor in this.correspondences) { if (number % divisor == 0) { result = result + this.correspondences[divisor]; } } if (result) { return result; } else { return number; } } } All the code is on Github, to see the intermediate steps of the Kata if you need them. You can also use the repository to try out your installation of JsTestDriver: a git pull followed by running the tests will confirm that it's working. Sometimes we don't test code in alien environments like JavaScript console or database queries because we don't know how; but a Kata which takes just two Pomodoros can solve the issue and let you enjoy a green bar even when working with a browser's interpreter.
April 21, 2011
by Giorgio Sironi
· 12,866 Views
The built-in qualifiers @Default and @Any
• @Default - Whenever a bean or injection point does not explicitly declare a qualifier, the container assumes the qualifier @Default. • @Any – When you need to declare an injection point without specifying a qualifier, it is useful to know that all beans have the @Any qualifier. This qualifier “belongs” to all beans and injection points (not applicable when @New is present), and when it is explicitly specified it suppresses the @Default qualifier. This is useful if you want to iterate over all beans with a certain bean type. The example presented in this post shows how the @Any qualifier works: you will iterate over all beans with a certain bean type. To start, you define a new Java type, like below – a simple interface representing a Wilson tennis racquet type:
package com.racquets;

public interface WilsonType {
    public String getWilsonRacquetType();
}
Now, three Wilson racquets will implement this interface – notice that no qualifier was specified:
package com.racquets;

public class Wilson339Racquet implements WilsonType {
    @Override
    public String getWilsonRacquetType() {
        return ("Wilson BLX Six.One Tour 90 (339 gr)");
    }
}

package com.racquets;

public class Wilson319Racquet implements WilsonType {
    @Override
    public String getWilsonRacquetType() {
        return ("Wilson BLX Six.One Tour 90 (319 gr)");
    }
}

package com.racquets;

public class Wilson96Racquet implements WilsonType {
    @Override
    public String getWilsonRacquetType() {
        return ("Wilson BLX Pro Tour 96");
    }
}
Next, you can use the @Any qualifier at the injection point to iterate over all beans of type WilsonType. An instance of each implementation is injected, and the result of the getWilsonRacquetType() method is stored in an ArrayList:
package com.racquets;

import java.util.ArrayList;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.inject.Any;
import javax.enterprise.inject.Instance;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import javax.inject.Named;

@RequestScoped
public class WilsonRacquetBean {

    private @Named @Produces ArrayList<String> wilsons = new ArrayList<String>();

    @Inject
    void initWilsonRacquets(@Any Instance<WilsonType> racquets) {
        for (WilsonType racquet : racquets) {
            wilsons.add(racquet.getWilsonRacquetType());
        }
    }
}
From a JSF page, you can easily iterate over the ArrayList (notice here that the wilsons collection was annotated with @Named (which allows JSF to access this field) and @Produces (as a simple explanation, a producer field suppresses the getter method)). The JSF snippet from the original post did not survive this excerpt; a sketch follows below. The output is shown as a screenshot in the original post. From http://e-blog-java.blogspot.com/2011/04/built-in-qualifiers-default-and-any.html
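A Facelets fragment that iterates over the produced #{wilsons} list might look like the following; ui:repeat is one common way to do it, and the surrounding markup is an assumption rather than the original snippet.
<!-- sketch: iterate over the @Named/@Produces field "wilsons" -->
<ul>
    <ui:repeat value="#{wilsons}" var="racquet">
        <li><h:outputText value="#{racquet}"/></li>
    </ui:repeat>
</ul>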
April 21, 2011
by A. Programmer
· 8,604 Views
Don't Use JmsTemplate in Spring!
JmsTemplate is easy for simple message sending. What if we want to add headers, intercept or transform the message? Then we have to write more code. So, how do we solve this common task with more configurability in lieu of more code? First, let's review JMS in Spring. Spring JMS Options JmsTemplate – either to send and receive messages inline Use send()/convertAndSend() methods to send messages Use receive()/receiveAndConvert() methods to receive messages. BEWARE: these are blocking methods! If there is no message on the Destination, it will wait until a message is received or times out. MessageListenerContainer – Async JMS message receipt by polling JMS Destinations and directing messages to service methods or MDBs Both JmsTemplate and MessageListenerContainer have been successfully implemented in Spring applications, but if we have to do something a little different, we introduce new code. What could possibly go wrong? Future Extensibility? On many projects new use-cases arise, such as: Route messages to different destinations, based on header values or contents? Log the message contents? Add header values? Buffer the messages? Improved response and error handling? Make configuration changes without having to recompile? and more… Now we have to refactor code and introduce new code and test cases, run it through QA, etc. etc. A More Configurable Solution! It is time to graduate from Spring JmsTemplate and play with the big kids. We can easily do this with a Spring Integration flow. How it is done with Spring Integration Here we have a diagram illustrating the 3 simple components Spring Integration uses to replace the JmsTemplate send. Create a Gateway interface – an interface defining method(s) that accept the type of data you wish to send and any optional header values. Define a Channel – the pipe connecting our endpoints Define an Outbound JMS Adapter – sends the message to your JMS provider (ActiveMQ, RabbitMQ, etc.) Simply inject this into our service classes and invoke the methods. Immediate Gains Add header & header values via the methods defined in the interface Simple invocation of Gateway methods from our service classes Multiple Gateway methods Configure method level or class level destinations Future Gains Change the JMS Adapter (one-way) to a JMS Gateway (two-way) to process responses from JMS We can change the channel to a queue (buffered) channel We can wire in a transformer for message transformation We can wire in additional destinations, and wire in a “header (key), header value, or content based” router and add another adapter We can wire in other inbound adapters receiving data from another source, such as SMTP, FTP, File, etc. Wiretap the channel to send a copy of the message elsewhere Change the channel to a logging adapter channel which would provide us with logging of the messages coming through Add the “message-history” option to our SI configuration to track the message along its route and more… Optimal JMS Send Solution The Spring Integration Gateway Interface Gateway provides one- or two-way communication with Spring Integration. If the method returns void, it is inherently one-way. The interface MyJmsGateway has one Gateway method declared, sendMyMessage(). When this method is invoked by your service class, the first argument will go into a message header field named “myHeaderKey”, and the second argument goes into the payload.
package com.gordondickens.sijms;

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.Header;

public interface MyJmsGateway {
    @Gateway
    public void sendMyMessage(@Header("myHeaderKey") String s, Object o);
}
Spring Integration Configuration Because the interface is proxied at runtime, we need to configure the Gateway via XML (the configuration is sketched after the summary below). Sending the Message
package com.gordondickens.sijms;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@ContextConfiguration("classpath:/com/gordondickens/sijms/JmsSenderTests-context.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class JmsSenderTests {

    @Autowired
    MyJmsGateway myJmsGateway;

    @Test
    public void testJmsSend() {
        myJmsGateway.sendMyMessage("myHeaderValue", "MY PayLoad");
    }
}
Summary Simple implementation Invoke a method to send a message to JMS – Very SOA eh? Flexible configuration Reconfigure & restart WITHOUT recompiling – SWEET!
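The Spring Integration XML configuration mentioned above was lost from this excerpt. A sketch of the three pieces (gateway, channel, outbound JMS adapter) typically looks like the following; the channel, destination and connection-factory names are assumptions.
<!-- sketch: gateway -> channel -> outbound JMS adapter -->
<int:gateway id="myJmsGateway"
             service-interface="com.gordondickens.sijms.MyJmsGateway"
             default-request-channel="jmsOutChannel"/>

<int:channel id="jmsOutChannel"/>

<int-jms:outbound-channel-adapter channel="jmsOutChannel"
                                  destination-name="my.queue"
                                  connection-factory="connectionFactory"/>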
April 21, 2011
by Gordon Dickens
· 84,488 Views
Converting a Spring SimpleFormController into an @Controller
In my previous post, I showed how to convert a Spring web controller class to use the @Controller annotation. In this post, I aim to show how forms in a Spring MVC application can also be converted to use annotations. Forms in Spring are typically modelled by extending the org.springframework.web.servlet.mvc.SimpleFormController class, but using Spring annotations, they can also be simplified and defined by the @Controller annotation. Without annotations, a SimpleFormController would be defined as below, both as a Java class and as a bean in XML (the XML is sketched at the end of this article).
import org.springframework.web.servlet.mvc.SimpleFormController;

public class PriceIncreaseFormController extends SimpleFormController {

    public ModelAndView onSubmit(Object command) {
        // Submit the form
    }

    protected Object formBackingObject(HttpServletRequest request) throws ServletException {
        // Initialize the form
    }
}
To convert the class to use annotations, we need to add an @Controller and @RequestMapping annotation. We can then define a simple POJO controller that does not need to extend Spring's classes.
@Controller
@RequestMapping("/priceincrease.htm")
public class PriceIncreaseFormController {

    @Autowired
    private PriceIncreaseValidator priceIncreaseValidator;

    @RequestMapping(method=RequestMethod.POST)
    public String onSubmit(@ModelAttribute("priceIncrease") PriceIncrease priceIncrease, BindingResult result) {
        int increase = priceIncrease.getPercentage();
        priceIncreaseValidator.validate(increase, result);
        if (result.hasErrors())
            return "priceIncrease";
        // Validator has succeeded.
        // Perform necessary actions and return to success page.
        return "redirect:home.htm";
    }

    @RequestMapping(method=RequestMethod.GET)
    public String initializeForm(ModelMap model) {
        // Perform any Model / Form initialization
        Map<Integer, String> priority = new LinkedHashMap<Integer, String>();
        priority.put(1, "Low");
        priority.put(2, "Normal");
        priority.put(3, "High");
        model.addAttribute("priorityList", priority);
        return "priceincrease";
    }

    // Other getters and setters as needed.
}
After defining the class like this, there is no need for the class to be defined within the Spring context XML file. The two methods outlined above in this simple class show how the initializeForm() method is called when an HTTP GET request is made to the form and how the form is submitted in the onSubmit() method via an HTTP POST request. The method called on a GET request is used to initialize the form, whereas the method called on a POST request is called when the form is submitted. The final thing to notice from this class is that the validation has to be explicitly called within the onSubmit() method. In this example, the PriceValidator is injected into the class via the @Autowired annotation. From http://www.davidsalter.co.uk/1/post/2011/04/converting-a-spring-simpleformcontroller-into-an-controller.html
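For comparison, the XML bean declaration that the pre-annotation version relies on ("as a bean in XML" above) would look something like this sketch; the package names and view names are assumptions.
<!-- sketch: wiring the SimpleFormController version in the Spring context XML -->
<bean name="/priceincrease.htm" class="com.example.web.PriceIncreaseFormController">
    <property name="commandName" value="priceIncrease"/>
    <property name="commandClass" value="com.example.service.PriceIncrease"/>
    <property name="validator" ref="priceIncreaseValidator"/>
    <property name="formView" value="priceincrease"/>
    <property name="successView" value="home.htm"/>
</bean>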
April 20, 2011
by David Salter
· 59,765 Views · 8 Likes