The Latest Coding Topics

Oracle Weblogic Stuck Thread Detection
The following question will again test your knowledge of the Oracle Weblogic threading model; I'm looking forward to your comments and experiences with it. If you are a Weblogic administrator, I'm certain you have heard of this common problem: stuck threads. This is one of the most common problems you will face when supporting a Weblogic production environment. A Weblogic stuck thread is simply a thread that has been processing the same request for a very long time, longer than the configurable Stuck Thread Max Time.

Question: How can you detect the presence of STUCK threads during and following a production incident?

Answer: As we saw in our last article, "Weblogic Thread Monitoring Tips," Weblogic provides functionality that allows us to closely monitor its internal self-tuning thread pool, and it will also flag the presence of any stuck thread. This monitoring view is very useful during live analysis, but what about after a production incident? The good news is that Oracle Weblogic also logs any detected stuck thread to the server log. That log entry includes details of the request and, more importantly, the thread stack trace. This data is crucial and can help you better understand the root cause of any slowdown condition that occurred at a given time.

```
<ExecuteThread: '11' for queue: 'weblogic.kernel.Default (self-tuning)'>
<[STUCK] ExecuteThread: '35' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "608" seconds working on the request "Workmanager: default, Version: 0, Scheduled=true, Started=true, Started time: 608213 ms
POST /App1/jsp/test.jsp HTTP/1.1
Accept: application/x-ms-application...
Referer: http://..
Accept-Language: en-US
User-Agent: Mozilla/4.0 ..
Content-Type: application/x-www-form-urlencoded
Accept-Encoding: gzip, deflate
Content-Length: 539
Connection: Keep-Alive
Cache-Control: no-cache
Cookie: JSESSIONID=
]", which is more than the configured time (StuckThreadMaxTime) of "600" seconds. Stack trace:
...................................
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:301)
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:184)
weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction....
weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run()
weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2281)
weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2180)
weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1491)
weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
```

Here is one more tip: generating and analyzing a JVM thread dump will also highlight stuck threads. As we can see from the snapshot below, the Weblogic thread state is updated to STUCK, which means this particular request has been executing for at least 600 seconds, or 10 minutes.
This is very useful information, since the native thread state will typically remain RUNNABLE; the native thread state only gets updated for BLOCKED threads and the like. Keep in mind that RUNNABLE simply means the thread is healthy from a JVM perspective. It does not mean the thread is healthy from a middleware or Java EE container perspective. This is why Oracle Weblogic maintains its own internal ExecuteThread state. Finally, if your organization or client uses a commercial monitoring tool, I recommend enabling alerts for both hogging threads and stuck threads. This will allow your support team to take proactive action before the affected Weblogic managed server(s) become fully unresponsive.
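As a complement to jstack, here is a minimal sketch (my own, not from the article) of capturing the same thread-dump information programmatically with the standard java.lang.management API. It shows only the JVM-level view; the STUCK flag itself is maintained by WebLogic's self-tuning pool, but WebLogic prefixes stuck threads' names with "[STUCK]", so they can be spotted directly in this output.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {

    // Print every live thread; ThreadInfo.toString() includes the thread
    // name, its state and the top of its stack trace, similar to what a
    // jstack-generated thread dump would show.
    public static void dumpAllThreads() {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threadMXBean.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}
```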
October 9, 2013
by Pierre-Hugues Charbonneau
· 54,468 Views
Code Coverage of Jasmine Tests using Istanbul and Karma
For modern web application development, having dozens of unit tests is not enough anymore. The actual code coverage of those tests reveals whether the application is thoroughly exercised or not. For tests written with the famous Jasmine test library, an easy way to get a coverage report is via Istanbul and Karma.

For this example, let's assume that we have a simple library sqrt.js which contains an alternative implementation of Math.sqrt. Note also how it throws an exception instead of returning NaN for an invalid input.

```javascript
var my = {
  sqrt: function (x) {
    if (x < 0) throw new Error("sqrt can't work on negative number");
    return Math.exp(Math.log(x) / 2);
  }
};
```

Using Jasmine, placed under test/lib/jasmine-1.3.1, we can craft a test runner that includes the following spec:

```javascript
describe("sqrt", function () {
  it("should compute the square root of 4 as 2", function () {
    expect(my.sqrt(4)).toEqual(2);
  });
});
```

Opening the spec runner in a web browser gives the expected outcome. So far so good. Now let's see how the code coverage of our test setup can be measured. The first order of business is to install Karma. If you are not familiar with Karma, it is basically a test runner that can launch and connect to a specific set of web browsers, run your tests, and then gather the report. Using Node.js, all we need to do is:

```
npm install karma karma-coverage
```

Before launching Karma, we need to specify its configuration. It could be as simple as the following my.conf.js (most entries are self-explanatory). Note that the tests are executed using PhantomJS for simplicity; it is, however, quite trivial to add other web browsers such as Chrome and Firefox.

```javascript
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine'],
    files: [
      '*.js',
      'test/spec/*.js'
    ],
    browsers: ['PhantomJS'],
    singleRun: true,
    reporters: ['progress', 'coverage'],
    preprocessors: { '*.js': ['coverage'] }
  });
};
```

Running the tests, as well as performing code coverage at the same time, can be triggered via:

```
node_modules/.bin/karma start my.conf.js
```

which will dump output like:

```
INFO [karma]: Karma v0.10.2 server started at http://localhost:9876/
INFO [launcher]: Starting browser PhantomJS
INFO [PhantomJS 1.9.2 (Linux)]: Connected on socket n9ndnhj0np92ntspgx-x
PhantomJS 1.9.2 (Linux): Executed 1 of 1 SUCCESS (0.029 secs / 0.003 secs)
```

As expected (from the previous manual invocation of the spec runner), the test passes just fine. However, the most interesting piece here is the code coverage report; it is stored (in the default location) under the subdirectory coverage. Open the report in your favorite browser and there you'll find the coverage analysis report.

Behind the scenes, Karma is using Istanbul, a comprehensive JavaScript code coverage tool (read also my previous blog post on JavaScript code coverage with Istanbul). Istanbul parses the source file, in this example sqrt.js, using Esprima and then adds some extra instrumentation which will be used to gather the execution statistics. The report you see above is one of the possible outputs; Istanbul can also generate an LCOV report which is suitable for many continuous integration systems (Jenkins, TeamCity, etc.). An extensive analysis of the coverage data should also prevent any future coverage regression; check out my other post, Hard Thresholds on JavaScript Code Coverage.

One important thing about code coverage is branch coverage. If you pay careful attention, our test above still does not exercise the situation where the input to my.sqrt is negative. There is a big "I" marker on the third line of the code; this is Istanbul telling us that the if branch is not taken at all (for the else branch, it would be an "E" marker). Once this missing branch is noticed, improving the situation is as easy as adding one more test to the spec:

```javascript
it("should throw an exception if given a negative number", function () {
  expect(function () { my.sqrt(-1); })
    .toThrow(new Error("sqrt can't work on negative number"));
});
```

Once the tests are executed again, the code coverage report looks much better and everyone is happy. If you have any difficulty following the above step-by-step instructions, take a look at a Git repository I have prepared: github.com/ariya/coverage-jasmine-istanbul-karma. Feel free to play with it and customize it to suit your workflow!
October 8, 2013
by Ariya Hidayat
· 48,869 Views
Add REST to Standalone Java with Jetty and Spring WebMVC
I’m going to start by discussing the Spring WebMVC configuration and move on from there in future posts.
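Since this entry is only the introduction, here is a minimal sketch (my own code, not the author's; WebConfig is a hypothetical @Configuration class) of the general pattern the title describes: an embedded Jetty server handing requests to a Spring WebMVC DispatcherServlet from a plain main() method.

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;

public class EmbeddedJettyRest {

    public static void main(String[] args) throws Exception {
        // Spring WebMVC context backed by an annotated configuration class.
        AnnotationConfigWebApplicationContext ctx = new AnnotationConfigWebApplicationContext();
        ctx.register(WebConfig.class);

        // Embedded Jetty server routing /rest/* to the DispatcherServlet.
        Server server = new Server(8080);
        ServletContextHandler handler = new ServletContextHandler();
        handler.setContextPath("/");
        handler.addServlet(new ServletHolder(new DispatcherServlet(ctx)), "/rest/*");
        server.setHandler(handler);

        server.start();
        server.join();
    }
}
```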
October 7, 2013
by Alan Hohn
· 36,187 Views · 1 Like
Hibernate Search based Autocomplete Suggester
In this article, I will show how to implement auto-completion using Hibernate Search. The same can be achieved using Solr or ElasticSearch, but I decided to use Hibernate Search as it's the simplest to get started with, integrates easily with an existing application, and leverages the same core: Lucene. And we get all of this without the overhead of managing a Solr/ElasticSearch cluster. All in all, I found Hibernate Search to be the go-to search engine for simple use cases.

For our use case, we build product-title-based auto-completion, since user queries are often searches for a product title. While typing, users should immediately see titles matching their requests, and Hibernate Search should do the hard work of filtering the relevant documents in near real time. Let's take the following JPA-annotated Product entity class:

```java
public class Product {
    @Id
    @Column(name = "sku")
    private String sku;

    @Column(name = "upc")
    private String upc;

    @Column(name = "title")
    private String title;
    ....
}
```

We are interested in returning suggestions based on the 'title' field. Title will be indexed using two strategies: N-Gram and Edge N-Gram.

Edge N-Gram: this matches only from the left edge of the suggestion text. For this we use KeywordTokenizerFactory (which emits the entire input as a single token) and EdgeNGramFilterFactory, along with some regex cleansing.

N-Gram matches from the start of every word, so that you can get right-truncated suggestions for any word in the text, not only the first word. The main difference from Edge N-Gram is the tokenizer, which here is StandardTokenizerFactory, along with NGramFilterFactory.

Using these strategies, if the document field is "A brown fox", then both the query "A bro" and the query "bro" will match.

Implementation: in the entity defined above, we can map the 'title' property twice with the above strategies. Below are the annotations to instruct Hibernate to index 'title' twice.
```java
@Entity
@Table(name = "item_master")
@Indexed(index = "Products")
@AnalyzerDefs({
    @AnalyzerDef(name = "autocompleteEdgeAnalyzer",
        // Split input into tokens according to tokenizer
        tokenizer = @TokenizerDef(factory = KeywordTokenizerFactory.class),
        filters = {
            // Normalize token text to lowercase, as the user is unlikely to
            // care about casing when searching for matches
            @TokenFilterDef(factory = PatternReplaceFilterFactory.class, params = {
                @Parameter(name = "pattern", value = "([^a-zA-Z0-9\\.])"),
                @Parameter(name = "replacement", value = " "),
                @Parameter(name = "replace", value = "all") }),
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = StopFilterFactory.class),
            // Index partial words starting at the front, so we can provide
            // autocomplete functionality
            @TokenFilterDef(factory = EdgeNGramFilterFactory.class, params = {
                @Parameter(name = "minGramSize", value = "3"),
                @Parameter(name = "maxGramSize", value = "50") }) }),
    @AnalyzerDef(name = "autocompleteNGramAnalyzer",
        // Split input into tokens according to tokenizer
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = WordDelimiterFilterFactory.class),
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = NGramFilterFactory.class, params = {
                @Parameter(name = "minGramSize", value = "3"),
                @Parameter(name = "maxGramSize", value = "5") }),
            @TokenFilterDef(factory = PatternReplaceFilterFactory.class, params = {
                @Parameter(name = "pattern", value = "([^a-zA-Z0-9\\.])"),
                @Parameter(name = "replacement", value = " "),
                @Parameter(name = "replace", value = "all") }) }),
    @AnalyzerDef(name = "standardAnalyzer",
        // Split input into tokens according to tokenizer
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = WordDelimiterFilterFactory.class),
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = PatternReplaceFilterFactory.class, params = {
                @Parameter(name = "pattern", value = "([^a-zA-Z0-9\\.])"),
                @Parameter(name = "replacement", value = " "),
                @Parameter(name = "replace", value = "all") }) })
})
public class Product {
    ....
}
```

Explanation: two custom analyzers, autocompleteEdgeAnalyzer and autocompleteNGramAnalyzer, have been defined as described in the previous section. Next, we apply these analyzers to the 'title' field to create two different indexes. Here is how we do it:

```java
@Column(name = "title")
@Fields({
    @Field(name = "title", index = Index.YES, store = Store.YES,
        analyze = Analyze.YES, analyzer = @Analyzer(definition = "standardAnalyzer")),
    @Field(name = "edgeNGramTitle", index = Index.YES, store = Store.NO,
        analyze = Analyze.YES, analyzer = @Analyzer(definition = "autocompleteEdgeAnalyzer")),
    @Field(name = "nGramTitle", index = Index.YES, store = Store.NO,
        analyze = Analyze.YES, analyzer = @Analyzer(definition = "autocompleteNGramAnalyzer"))
})
private String title;
```

Start indexing:

```java
public void index() throws InterruptedException {
    getFullTextSession().createIndexer().startAndWait();
}
```

Once indexed, inspect the index using Luke and you should be able to see the title analyzed and stored as N-Grams and Edge N-Grams.
Search query:

```java
private static final String TITLE_EDGE_NGRAM_INDEX = "edgeNGramTitle";
private static final String TITLE_NGRAM_INDEX = "nGramTitle";

@Transactional(readOnly = true)
public synchronized List<Product> getSuggestions(final String searchTerm) {
    QueryBuilder titleQB = getFullTextSession().getSearchFactory()
            .buildQueryBuilder().forEntity(Product.class).get();

    Query query = titleQB.phrase().withSlop(2).onField(TITLE_NGRAM_INDEX)
            .andField(TITLE_EDGE_NGRAM_INDEX).boostedTo(5)
            .sentence(searchTerm.toLowerCase()).createQuery();

    FullTextQuery fullTextQuery = getFullTextSession().createFullTextQuery(
            query, Product.class);
    fullTextQuery.setMaxResults(20);

    @SuppressWarnings("unchecked")
    List<Product> results = fullTextQuery.list();
    return results;
}
```

And we have a working suggester. What next? Expose the functionality via a REST API and integrate it with jQuery, examples of which can easily be found. You can also use the same strategy with Solr and ElasticSearch.
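As a hedged sketch of that "what next" step (not from the article; ProductSearchService is a hypothetical wrapper around the getSuggestions() method shown above), a Spring MVC endpoint could look like this:

```java
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class SuggestController {

    @Autowired
    private ProductSearchService searchService; // hypothetical service wrapping getSuggestions()

    // GET /suggest?q=bro returns the matching products as the response body.
    @RequestMapping("/suggest")
    @ResponseBody
    public List<Product> suggest(@RequestParam("q") String term) {
        return searchService.getSuggestions(term);
    }
}
```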
October 7, 2013
by Nishant Chandra
· 15,705 Views · 1 Like
Introduction to Android Studio
It feels good to be back at the blog. I have been busy managing GDG Ahmedabad, delivering Android talks, and running workshops locally and outside my region. Last month I was quite busy organizing the DevFest event for GDG Ahmedabad, and then preparing the two talks I was invited to deliver at the DevFest organized by GDG Kathmandu. I have already published the slides on my Speaker Deck, but let me also write an introduction to Android Studio here.

What is Android Studio? It's an Android-focused IDE, designed specifically for Android development. It was launched on May 16, 2013, during the Google I/O 2013 event. Android Studio contains all the Android SDK tools to design, test, debug and profile your app. Looking at the development tools and environment, it is similar to Eclipse with the ADT plug-in, but as mentioned above, it's an Android-focused IDE, and there are many cool features available in Android Studio that can increase your development productivity. One great thing is that it is based on the IntelliJ IDEA IDE, which has proven itself to be a great IDE and is used by many Android engineers.

What is the difference between IntelliJ IDEA and Android Studio? Nothing, with regard to Android. If you use IntelliJ, keep using it: IntelliJ 13 will have the same features, and the EAP of IntelliJ IDEA 13 already includes all the new stuff. If not, give Android Studio a try. If you have questions about IntelliJ and Android Studio, check the FAQ: IntelliJ IDEA and Android Studio FAQ.

Let's download Android Studio. You can download it from the Android developer site: http://developer.android.com/sdk/installing/studio.html.

Cool features of Android Studio. As mentioned, it's similar to Eclipse with the ADT plug-in, but Android Studio has many cool features that can help increase your development productivity:

- Powerful code editing (smart editing, code refactoring)
- Rich layout editor (as soon as you drag and drop views onto the layout, it shows you a preview on all screens, including Nexus 4, Nexus 7, Nexus 10 and many other resolutions; layout design can be done much faster than in Eclipse)
- Gradle-based build support
- Maven support
- Template-based wizards
- Lint tool analysis (the Android lint tool is a static code analysis tool that checks your Android project source files for potential bugs and optimization improvements for correctness, security, performance, usability, accessibility, and internationalization)

You can experience all these features by using Android Studio yourself. Here is some of the awesome stuff inside.

Darcula theme. It's a dark theme, and while using Android Studio I enjoy working in the Darcula environment. By the way, it's the Darcula theme, not Dracula; I am pointing this out only because I have seen many people on Stack Overflow and Google+ writing Dracula. You can set the Darcula theme in Android Studio via File > Settings > IDE Settings > Appearance > Theme: Darcula.

Preview all the screens. We can consider this part of the rich layout editor. Users can design layouts and check them by previewing them on all the possible screens, such as Nexus 4, Nexus 7, Nexus 10 and many other devices. It helps the user improve layout designs while ensuring compatibility with the various available resolutions.

Device-framed screen capture. Android Studio can directly generate a screenshot of your application. Yes, this was already included in the SDK, but Android Studio provides something more: a device frame (frames for many Nexus devices are available, so you can capture the screenshot in whichever frame you like most), drop shadow, and screen glare.

Color preview. I like this feature very much and have found it helpful while working on big projects. With Eclipse we had to use a third-party color chooser and picker, but this feature lets you select a color from the built-in color chooser and also preview it in the colors.xml file.

Color preview in the Activity class. With Eclipse, it's difficult to check which color we have used. Yes, we can imagine the color by its name, but an actual preview is much better. This feature was recently introduced in Android Studio, so you must have the latest version installed.

Hard-coded strings. Here is another feature I like and have found useful: whenever you use a string resource from strings.xml, the editor displays the actual value instead of the variable name. This is the default setting, but in case you aren't seeing hard-coded strings in your activity class, try any of the following: Settings > Editor > Code Folding > Android String References, or select the string, right-click on it, and go to Folding > Collapse, or press CTRL + Numpad '-'.

Create layout variations. This provides the ability to create layout variations directly, for example a layout for large screens or a layout for extra-large screens. The great thing is that the created variant layout gets stored in the appropriate folder, such as layout-xlarge or layout-large-land.

Should I use Android Studio? You might have explored all the cool features, or be ready to explore them right now, but a question may have arisen in your mind: "Should I use Android Studio?", "Should we start using Android Studio right now?", or "Should I continue with IntelliJ or Eclipse?" My answer is a big NO to using Android Studio as your main IDE for Android development right now, because it is currently an EARLY ACCESS PREVIEW and is still maturing day by day; the engineers have been working hard to improve this IDE. You should wait until the beta comes out. I agree with Carlos Vega (who commented on Google+) on this point: "You should at least migrate to IntelliJ IDEA 12 so that you get familiar with the IDE's workflow and keyboard shortcuts. That way, when Android Studio reaches a more stable level, you can switch without a major learning curve." Thanks, Carlos Vega, for the input. By the way, here is the presentation I delivered at the GDG Kathmandu DevFest.
October 7, 2013
by Paresh Mayani
· 26,352 Views
Clojure: Stripping all the Whitespace
When putting together data sets to play around with, one of the more boring tasks is stripping out characters that you’re not interested in and more often than not those characters are white spaces. Since I’ve been building data sets using Clojure I wanted to write a function that would do this for me. I started out with the following string: (def word " with a little bit of space we can make it through the night ") which I wanted to format in such a way that there would be a maximum of one space between each word. I start out by using the trim function but that only removes white space from the beginning and end of a string: > (clojure.string/trim word) "with a little bit of space we can make it through the night" I wanted to get rid of the space in between ‘a’ and ‘little’ as well so I wrote the following code to split on a space and filter out any excess spaces that still remained before joining the words back together: > (clojure.string/join " " (filter #(not (clojure.string/blank? %)) (clojure.string/split word #" "))) "with a little bit of space we can make it through the night" I wanted to try and make it a bit easier to read by using the thread last (->>) macro but that didn’t work as well as I’d hoped because clojure.string/split doesn’t take the string in as its last parameter: > (->> (clojure.string/split word #" ") (filter #(not (clojure.string/blank? %))) (clojure.string/join " ")) "with a little bit of space we can make it through the night" I worked around it by creating a specific function for splitting on a space: (defn split-on-space [word] (clojure.string/split word #"\s")) which means we can now chain everything together nicely: > (->> word split-on-space (filter #(not (clojure.string/blank? %))) (clojure.string/join " ")) "with a little bit of space we can make it through the night" I couldn’t find a cleaner way to do this but I’m sure there is one and my googling just isn’t up to scratch so do let me know in the comments!
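For what it's worth, one compact alternative (my own suggestion, not from the post) is to collapse runs of whitespace with a single regex replace after trimming:

```clojure
;; trim the ends, then squeeze internal runs of whitespace to one space
(clojure.string/replace (clojure.string/trim word) #"\s+" " ")
;; => "with a little bit of space we can make it through the night"
```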
October 3, 2013
by Mark Needham
· 4,063 Views
TestNG @Test Annotation and DataProviderClass Example
In the previous post, we saw an example where the dataProvider attribute was used to test methods with different sets of input data for the same test method. TestNG provides another attribute, dataProviderClass, used in conjunction with dataProvider, to fetch the input data for test methods from an external class. The class that holds the input data is set via the dataProviderClass attribute, and dataProvider itself holds the name of the method where the input data is actually fetched. Here is a quick example of how to use the dataProviderClass and dataProvider attributes.

Service class:

```java
package com.skilledmonster.example;

/**
 * Simple calculator service to demonstrate TestNG Framework
 *
 * @author Jagadeesh Motamarri
 * @version 1.0
 */
public interface CalculatorService {
    int sum(int a, int b);
    int multiply(int a, int b);
    int div(int a, int b);
    int sub(int a, int b);
}
```

Service implementation class:

```java
package com.skilledmonster.example;

/**
 * Simple calculator service implementation to demonstrate TestNG Framework
 *
 * @author Jagadeesh Motamarri
 * @version 1.0
 */
public class SimpleCalculator implements CalculatorService {
    public int sum(int a, int b) {
        return a + b;
    }

    public int multiply(int a, int b) {
        return a * b;
    }

    public int div(int a, int b) {
        return a / b;
    }

    public int sub(int a, int b) {
        return a - b;
    }
}
```

Data provider class:

```java
package com.skilledmonster.common;

import org.testng.annotations.DataProvider;

/**
 * Data provider class for TestNG test cases
 *
 * @author Jagadeesh Motamarri
 * @version 1.0
 */
public class TestNGDataProvider {

    /**
     * Data provider for testing the sum of 2 numbers
     */
    @DataProvider
    public static Object[][] testSumInput() {
        return new Object[][] { { 5, 5 }, { 10, 10 }, { 20, 20 } };
    }

    /**
     * Data provider for testing the multiplication of 2 numbers
     */
    @DataProvider
    public static Object[][] testMultipleInput() {
        return new Object[][] { { 5, 5 }, { 10, 10 }, { 20, 20 } };
    }
}
```

Finally, here is the test class that uses the dataProviderClass attribute to feed the input data to the test methods:
```java
package com.skilledmonster.example;

import org.testng.Assert;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

import com.skilledmonster.common.TestNGDataProvider;

/**
 * Example to demonstrate use of the dataProviderClass and dataProvider attributes of the TestNG framework
 *
 * @author Jagadeesh Motamarri
 * @version 1.0
 */
public class TestNGAnnotationTestDataProviderExample {

    public CalculatorService service;

    @BeforeClass
    public void init() {
        System.out.println("@BeforeClass: The annotated method will be run before the first test method in the current class is invoked.");
        System.out.println("init service");
        service = new SimpleCalculator();
    }

    @Test(dataProviderClass = TestNGDataProvider.class, dataProvider = "testSumInput")
    public void testSum(int a, int b) {
        System.out.println("@Test : testSum()");
        int result = service.sum(a, b);
        Assert.assertEquals(result, a + b);
    }

    @Test(dataProviderClass = TestNGDataProvider.class, dataProvider = "testMultipleInput")
    public void testMultiple(int a, int b) {
        System.out.println("@Test : testMultiple()");
        int result = service.multiply(a, b);
        Assert.assertEquals(result, a * b);
    }
}
```

Output: as shown in the console output, each of the testSum() and testMultiple() methods is invoked with different sets of input data, fetched from an external class via the dataProviderClass attribute.

Advantage: more flexibility and reusability of commonly used data across several test classes.

Download: TestNG DataProvider Example
October 2, 2013
by Jagadeesh Motamarri
· 25,183 Views
Free Offline HTML WYSIWYG Editors
Here are some free HTML WYSIWYG editors based on the Mozilla Gecko Engine.
October 2, 2013
by Kosta Stojanovski
· 33,313 Views · 2 Likes
Clojure: Converting a string to a date
I wanted to do some date manipulation in Clojure recently and figured that since clj-time is a wrapper around Joda Time it would probably do the trick. The first thing we need to do is add the dependency to our project file and then run lein deps to pull down the appropriate JARs. The project file should look something like this:

project.clj

```clojure
(defproject ranking-algorithms "0.1.0-SNAPSHOT"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.4.0"]
                 [clj-time "0.6.0"]])
```

Now let's load the clj-time.format namespace into the REPL, since we know we'll be parsing dates:

```clojure
> (require '(clj-time [format :as f]))
```

The string that I want to convert into a date looks like this:

```clojure
(def string-date "18 September 2012")
```

The first thing we should do is check whether there is an existing formatter that we can use by evaluating the following function:

```clojure
> (f/show-formatters)
...
:hour-minute 06:45
:hour-minute-second 06:45:22
:hour-minute-second-fraction 06:45:22.473
:hour-minute-second-ms 06:45:22.473
:mysql 2013-09-20 06:45:22
:ordinal-date 2013-263
:ordinal-date-time 2013-263T06:45:22.473Z
:ordinal-date-time-no-ms 2013-263T06:45:22Z
:rfc822 Fri, 20 Sep 2013 06:45:22 +0000
...
```

There are a lot of different built-in formatters, but unfortunately I couldn't find one that exactly matched our date format, so we'll have to write our own. For that we'll need to refresh our knowledge of Java date formatting. We end up with the following formatter:

```clojure
> (f/parse (f/formatter "dd MMM YYYY") string-date)
#<DateTime 2012-09-18T00:00:00.000Z>
```

It took me much longer than it should have to remember that 'MMM' is the pattern that matches the short form of a month. It's just the same as what we'd do in Java, but with some neat wrapper functions.
October 2, 2013
by Mark Needham
· 5,545 Views
Introducing the NPM Maven Plugin
This post comes from Alberto Pose at the MuleSoft blog.

Introduction: suppose that you have a Maven project and you want to download Node.js modules previously uploaded to NPM. One way of doing that without running Node is by using the npm-maven-plugin. It allows you to download the required Node modules without running Node.js: it is implemented entirely on the JVM.

Getting started: first of all, you will need to add the MuleSoft Maven repository to your pom.xml file (id: mulesoft-releases, name: MuleSoft Repository, url: https://repository.mulesoft.org/releases/). After doing that, add the plugin (groupId org.mule.tools.javascript, artifactId npm-maven-plugin, version 1.0) to the build -> plugins section of your pom.xml, with an execution bound to the generate-sources phase that runs the fetch-modules goal for the modules colors:0.5.1 and jshint:0.8.1. Then just execute:

```
mvn generate-sources
```

and that's it!

One more thing: by default, the modules can be found in src/main/resources/META-INF, but that path can be changed by setting the 'outputDirectory' parameter. Also, module transitivity is taken into account; that means the plugin will download all the required dependencies before downloading the specified module.

Show me the code! The source code can be found here. Feel free to fork the project and propose changes to it. Happy (Node.js and Maven) hacking!
October 1, 2013
by Ross Mason
· 18,521 Views · 1 Like
Sparse and Memory-mapped Files
One of the problems with memory-mapped files is that you can't actually map beyond the end of the file, so you can't use mapping to extend a file. I had a thought and set out to check what happens when I create a sparse file (a file that only takes up space when you actually write to it) and map it at the same time. As it turns out, this works pretty well in practice; you can do so without any issues. Here is how it works:

```csharp
using (var f = File.Create(path))
{
    int bytesReturned = 0;
    var nativeOverlapped = new NativeOverlapped();
    if (!NativeMethod.DeviceIoControl(f.SafeFileHandle, EIoControlCode.FsctlSetSparse,
        IntPtr.Zero, 0, IntPtr.Zero, 0, ref bytesReturned, ref nativeOverlapped))
    {
        throw new Win32Exception();
    }
    f.SetLength(1024*1024*1024*64L);
}
```

This creates a sparse file that is 64 GB in size. Then we can map it normally:

```csharp
using (var mmf = MemoryMappedFile.CreateFromFile(path))
using (var memoryMappedViewAccessor = mmf.CreateViewAccessor(0, 1024*1024*1024*64L))
{
    for (long i = 0; i < memoryMappedViewAccessor.Capacity; i += buffer.Length)
    {
        memoryMappedViewAccessor.WriteArray(i, buffer, 0, buffer.Length);
    }
}
```

And then we can do stuff to it, and that includes writing to yet-unallocated parts of the file. This also means that you don't have to worry about writing past the end of the file; the OS will take care of all of that for you. Happy happy, joy joy, etc.

There is one problem with this method, however. You have a 64 GB file, but you don't have that much actually allocated, which in turn means you might not have that much space available for the file. This brings up an interesting question: what happens when you try to commit a new page and the disk is out of space? Using file I/O you would get an I/O error with the right error code, but with memory-mapped files the error actually turns up during memory access, which can happen pretty much anywhere. It also surfaces as a Structured Exception Handling error in Windows, which requires special treatment.

To test this out, I wrote code that writes to a disk with only about 50 GB free; I wanted to know what would happen when it ran out of space. That is something that actually happens, and we need to be able to address the issue robustly. The kicker is that it might happen at any time, which would really result in some... interesting behavior with regards to robustness. In other words, I don't think this is a viable option: it is a really cool trick, but I don't think it is a very well-thought-out option. By the way, the result of my experiment was an effectively frozen process. No errors, nothing, just a hang. Also, I am pretty sure that WriteArray() is really slow, but I'll check that out at another point in time.
October 1, 2013
by Oren Eini
· 7,905 Views
Getting Started with NHibernate and ASP.NET MVC: CRUD Operations
In this post we are going to learn how to use NHibernate in an ASP.NET MVC application.

What is NHibernate? ORMs (Object Relational Mappers) are quite popular these days. An ORM is a mechanism to map database entities to class objects without writing code for fetching data or hand-rolled SQL queries; it automatically generates the SQL queries for us and fetches the data on our behalf. NHibernate is one such Object Relational Mapper, a port of the popular Java ORM Hibernate. It provides a framework for mapping domain model classes to a traditional relational database, and it frees us from writing repetitive ADO.NET code, since it acts as our database layer. Let's get started with NHibernate.

How to download: you can get the ORM in two ways, as a NuGet package or from the SourceForge site.
NuGet: http://www.nuget.org/packages/NHibernate/
SourceForge: http://sourceforge.net/projects/nhibernate/

Creating a table for CRUD: I am going to use SQL Server 2012 Express Edition as the database, with a table with four fields: Id, FirstName, LastName, Designation.

Creating an ASP.NET MVC project for NHibernate: create an ASP.NET MVC project via File -> New Project -> ASP.NET MVC 4 Web Application.

Installing the NuGet package for NHibernate: I installed the NuGet package via the Package Manager Console.

NHibernate configuration file: NHibernate needs a configuration file for the database connection and other settings. You need to create a file named 'hibernate.cfg.xml' in the Models/Nhibernate folder of your application, specifying the connection provider (NHibernate.Connection.DriverConnectionProvider), the driver class (NHibernate.Driver.SqlClientDriver), the connection string (Server=(local);database=LocalDatabase;Integrated Security=SSPI;) and the dialect (NHibernate.Dialect.MsSql2012Dialect). You need to select the driver class and connection provider appropriate for your database; if you are using another database, such as Oracle or MySQL, you will have a different configuration. NHibernate can work with any database.

Creating a model class for NHibernate: now it's time to create the model class for our CRUD operations. The property names are identical to the database table columns.

```csharp
namespace NhibernateMVC.Models
{
    public class Employee
    {
        public virtual int Id { get; set; }
        public virtual string FirstName { get; set; }
        public virtual string LastName { get; set; }
        public virtual string Designation { get; set; }
    }
}
```

Creating a mapping file between class and table: we also need an XML mapping file, named "Employee.hbm.xml", in the Nhibernate folder, mapping the Employee class to the table.

Creating a class to open an NHibernate session: I created a class in the Models folder called NHibertnateSession with a static function to open an NHibernate session.
```csharp
using System.Web;
using NHibernate;
using NHibernate.Cfg;

namespace NhibernateMVC.Models
{
    public class NHibertnateSession
    {
        public static ISession OpenSession()
        {
            var configuration = new Configuration();
            var configurationPath = HttpContext.Current.Server.MapPath(@"~\Models\Nhibernate\hibernate.cfg.xml");
            configuration.Configure(configurationPath);
            var employeeConfigurationFile = HttpContext.Current.Server.MapPath(@"~\Models\Nhibernate\Employee.hbm.xml");
            configuration.AddFile(employeeConfigurationFile);
            ISessionFactory sessionFactory = configuration.BuildSessionFactory();
            return sessionFactory.OpenSession();
        }
    }
}
```

Listing: now that our OpenSession method is ready, it's time to write controller code to fetch data from the database:

```csharp
using System;
using System.Web.Mvc;
using NHibernate;
using NHibernate.Linq;
using System.Linq;
using NhibernateMVC.Models;

namespace NhibernateMVC.Controllers
{
    public class EmployeeController : Controller
    {
        public ActionResult Index()
        {
            using (ISession session = NHibertnateSession.OpenSession())
            {
                var employees = session.Query<Employee>().ToList();
                return View(employees);
            }
        }
    }
}
```

Here you can see I obtain a session via the OpenSession method and then query the database to fetch the employee list. Let's create a strongly typed view for this via right-clicking on the action method and choosing Add View. Our listing screen is ready; once you run the project it will fetch the data.

Create/Add: now it's time to write the add-employee code. Here I use the session's Save method to store the new employee. The first action method returns a blank view; the second, marked with the HttpPost attribute, saves the data to the database.

```csharp
public ActionResult Create()
{
    return View();
}

[HttpPost]
public ActionResult Create(Employee employee)
{
    try
    {
        using (ISession session = NHibertnateSession.OpenSession())
        {
            using (ITransaction transaction = session.BeginTransaction())
            {
                session.Save(employee);
                transaction.Commit();
            }
        }
        return RedirectToAction("Index");
    }
    catch (Exception exception)
    {
        return View();
    }
}
```

Now create a strongly typed Create view via right-clicking on the view and choosing Add View. Once you run the application and click Create New, the create screen loads.

Edit/Update: now let's build the edit functionality with NHibernate and ASP.NET MVC. For that I wrote two action methods, one to load the edit view and one to save the data:

```csharp
public ActionResult Edit(int id)
{
    using (ISession session = NHibertnateSession.OpenSession())
    {
        var employee = session.Get<Employee>(id);
        return View(employee);
    }
}

[HttpPost]
public ActionResult Edit(int id, Employee employee)
{
    try
    {
        using (ISession session = NHibertnateSession.OpenSession())
        {
            var employeetoUpdate = session.Get<Employee>(id);
            employeetoUpdate.Designation = employee.Designation;
            employeetoUpdate.FirstName = employee.FirstName;
            employeetoUpdate.LastName = employee.LastName;
            using (ITransaction transaction = session.BeginTransaction())
            {
                session.Save(employeetoUpdate);
                transaction.Commit();
            }
        }
        return RedirectToAction("Index");
    }
    catch
    {
        return View();
    }
}
```

In the first action method I fetch the existing employee via the Get method of the NHibernate session; in the second I fetch the current employee and apply the updated details. You can create a strongly typed view for edit the same way, via right-click -> Add View.
Details: now it's time to create a details view where the user can see an employee's details. I wrote the following logic:

```csharp
public ActionResult Details(int id)
{
    using (ISession session = NHibertnateSession.OpenSession())
    {
        var employee = session.Get<Employee>(id);
        return View(employee);
    }
}
```

You can add the view via right-clicking on the action result; once you run it in the browser, the details screen appears.

Delete: finally, the delete functionality:

```csharp
public ActionResult Delete(int id)
{
    using (ISession session = NHibertnateSession.OpenSession())
    {
        var employee = session.Get<Employee>(id);
        return View(employee);
    }
}

[HttpPost]
public ActionResult Delete(int id, Employee employee)
{
    try
    {
        using (ISession session = NHibertnateSession.OpenSession())
        {
            using (ITransaction transaction = session.BeginTransaction())
            {
                session.Delete(employee);
                transaction.Commit();
            }
        }
        return RedirectToAction("Index");
    }
    catch (Exception exception)
    {
        return View();
    }
}
```

The first action method returns the delete confirmation view, and the second performs the actual delete operation with the session's Delete method. That's it; it's very easy to do CRUD operations with NHibernate. Stay tuned for more.
October 1, 2013
by Jalpesh Vadgama
· 46,968 Views
Installing NetCDF and R 'ncdf'
If you work with large, gridded datasets, you should probably be using NetCDF, the Network Common Data Form from Unidata: NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. Lots of high-end analysis software can be made to support NetCDF, and it is indispensable for working with gridded datasets that weigh in at tens of gigabytes or more. This brief post describes the easiest way to install the NetCDF libraries and the R 'ncdf' package on our favorite systems: CentOS, Ubuntu and Mac OS X.

CentOS 6.x

CentOS is the operating system of choice if you want a free, robust, open-source server to host your scientific analysis; it is basically an unbranded clone of Red Hat Enterprise Linux. The following instructions worked on CentOS 6.2.

Installing system libraries: first, go to http://fedoraproject.org/wiki/EPEL and check for the latest version of the Extra Packages for Enterprise Linux (which contains NetCDF, HDF and many other useful packages). The latest version should be specified on that page. To download a local copy of EPEL and install NetCDF from it, just execute the following commands:

```
sudo wget http://mirror.metrocast.net/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
sudo rpm -Uvh epel-release-6-8.noarch.rpm
sudo yum --assumeyes install netcdf
sudo yum --assumeyes install netcdf-devel
```

Note that adding EPEL as a package archive in the second line does not automatically install all the packages in EPEL; we had to install netcdf and netcdf-devel manually. A list of other available packages is given at the EPEL wiki mentioned above.

Installing the R package 'ncdf': with the libraries in place, we can now install the ncdf package for our favorite statistical package, R.

```
sudo wget http://cran.r-project.org/src/contrib/ncdf_1.6.6.tar.gz
sudo R CMD INSTALL --configure-args="--with-netcdf-include=/usr/include --with-netcdf-lib=/usr/lib" ncdf_1.6.6.tar.gz
```

Ubuntu 12.04

Ubuntu is easy to install and has a great user interface for Linux systems. Ubuntu 12.04 is the most recent long-term-support release. The following instructions worked on Ubuntu 12.04 LTS.

Installing system libraries: to install NetCDF libraries that allow reading, writing and manipulation, use apt-get rather than downloading the source files and building them yourself. To install, open a terminal and type:

```
sudo apt-get install netcdf
```

Installing the R package 'ncdf': the base version of R on Ubuntu 12.04 LTS is 2.14.1. Unfortunately, clicking the install button in RStudio and typing 'ncdf' will only work at the user level; the package will not be installed for all users or even show up in all of your RStudio projects. To install the ncdf tools in the global library you must start R as root and use the following command:

```
install.packages(repos=c('http://cran.fhcrc.org/'), pkgs=c('ncdf'), lib="/usr/lib/R/site-library/")
```

'http://cran.fhcrc.org/' should be replaced by whichever CRAN mirror is closest to you.

OS X 10.8.4

Macs run OS X, which is Unix based. The following instructions worked on OS X 10.8.4, Mountain Lion.

Installing system libraries: the absolute easiest way to install NetCDF on a Mac requires MacPorts, a software package designed to make installing and compiling software easy. MacPorts .pkg files and installation instructions are available here. Once MacPorts is installed, building and installing the NetCDF libraries is a one-step job.
```
sudo port install netcdf
```

More details, plus instructions for installing the Fortran and Python APIs, are available here.

Installing the R package 'ncdf': having the NetCDF command line tools is not required to use the ncdf R package. Simply download the package from CRAN (link), or click on the "Install Packages" button in RStudio. This package allows reading, writing and manipulation of existing .nc files. However, the package's ability to view the content of .nc files before loading them into the R workspace is limited. For this reason, installing the NetCDF tools outlined in the first section of this post is extremely important: command line tools such as "ncdump" are crucial to working effectively with NetCDF files.
September 30, 2013
by Jonathan Callahan
· 10,216 Views
ElasticSearch: Java API
ElasticSearch provides a Java API; it executes all operations asynchronously through a client object.
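To make that one-line summary concrete, here is a minimal sketch (my own, not from the article) against the 0.90-era Java API that was current when this was written; host, port, index and document names are illustrative:

```java
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class EsClientDemo {

    public static void main(String[] args) {
        // Connect to a node over the transport protocol (default port 9300).
        Client client = new TransportClient()
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        try {
            // Every operation is asynchronous; actionGet() blocks until the
            // returned future completes.
            IndexResponse response = client.prepareIndex("blog", "post", "1")
                    .setSource("{\"title\":\"Hello\"}")
                    .execute()
                    .actionGet();
            System.out.println("Indexed document id: " + response.getId());
        } finally {
            client.close();
        }
    }
}
```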
September 30, 2013
by Hüseyin Akdoğan
· 137,232 Views · 4 Likes
Parallel SQL in C#
So, I've been wanting to get back to playing with C# for a while, and I finally had the opportunity. I've also been wanting to play with the Task library in .NET and see if I could get it to do something interesting; the result is below.

The code below, running in a .NET 4 project, runs two SQL SELECT statements against the AdventureWorks2012 database. There are three tasks here: ParallelTask 1 and 2, and a timing task. Each parallel task takes a connection string and a query as inputs and passes out status messages. One important point is that a task has to be self-contained; this is why the connection is instantiated within the task. I also added a timing task (ParallelTiming) so I could pass out a ping message. The whole thing is controlled by the code in the Main method, which starts the three tasks with their appropriate parameters, awaits their completion, and then prints the resulting return messages.

Try it out; it's good fun, and all you need is SQL Server, AdventureWorks and something to build C# projects. You can download the code here. Have fun!

```csharp
/// Parallel_SQL demonstration code
/// From Nick Haslam
/// http://blog.nhaslam.com
/// 16/9/2013

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Parallel_SQL
{
    class Program
    {
        /// <summary>First parallel task</summary>
        /// <param name="sConnString">Connection string details</param>
        /// <param name="sQuery">Query to execute</param>
        /// <param name="StatusMessage">Status message to pass back</param>
        static Task<string> ParallelTask1(string sConnString, string sQuery, Action<string> StatusMessage)
        {
            return Task.Factory.StartNew(() =>
            {
                SqlConnection conn = new SqlConnection(sConnString);
                conn.Open();
                StatusMessage("Running Query");
                SqlDataReader reader = null;
                SqlCommand sqlCommand = new SqlCommand(sQuery, conn);
                reader = sqlCommand.ExecuteReader();
                while (reader.Read())
                {
                    StatusMessage(reader[0].ToString());
                }
                return "Task 1 Complete";
            });
        }

        /// <summary>Second parallel task</summary>
        /// <param name="sConnString">Connection string details</param>
        /// <param name="sQuery">Query to execute</param>
        /// <param name="StatusMessage">Status message to pass back</param>
        static Task<string> ParallelTask2(string sConnString, string sQuery, Action<string> StatusMessage)
        {
            return Task.Factory.StartNew(() =>
            {
                SqlConnection conn = new SqlConnection(sConnString);
                conn.Open();
                StatusMessage("Running Query");
                SqlDataReader reader = null;
                SqlCommand sqlCommand = new SqlCommand(sQuery, conn);
                reader = sqlCommand.ExecuteReader();
                while (reader.Read())
                {
                    StatusMessage(reader[0].ToString());
                }
                return "Task 2 Complete";
            });
        }

        /// <summary>Timing task</summary>
        /// <param name="iMSPause">Milliseconds between pings</param>
        /// <param name="StatusMessage">Status message to pass back</param>
        static Task<string> ParallelTiming(int iMSPause, Action<string> StatusMessage)
        {
            return Task.Factory.StartNew(() =>
            {
                for (int i = 0; i < 10; i++)
                {
                    System.Threading.Thread.Sleep(iMSPause);
                    StatusMessage("******************** PING ********************");
                }
                return "Timing task done";
            });
        }

        static void Main(string[] args)
        {
            string sConnString = "server=.; Trusted_Connection=yes; database=AdventureWorks2012;";
            try
            {
                var Task1Control = ParallelTask1(sConnString,
                    "SELECT top 500 TransactionID FROM Production.TransactionHistory",
                    (update) => { Console.WriteLine(String.Format("{0} - {1}", DateTime.Now, update)); });

                var Task2Control = ParallelTask2(sConnString,
                    "SELECT top 500 SalesOrderDetailID FROM sales.SalesOrderDetail",
                    (update) => { Console.WriteLine(String.Format("{0} - \t\t{1}", DateTime.Now, update)); });

                var TimingTaskControl = ParallelTiming(250,
                    (update) => { Console.WriteLine(String.Format("{0} - \t\t\t{1}", DateTime.Now, update)); });

                // Await completion of the tasks
                Console.WriteLine("Task 1 Status - {0}", Task1Control.Result);
                Console.WriteLine("Task 2 Status - {0}", Task2Control.Result);
                Console.WriteLine("Timing Task Status - {0}", TimingTaskControl.Result);
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
            Console.ReadKey();
        }
    }
}
```
September 29, 2013
by Nick Haslam
· 22,050 Views · 23 Likes
Clojure: Converting an Array/Set into a Hash Map
When I was implementing the Elo Rating algorithm a few weeks ago, one thing I needed to do was come up with a base ranking for each team. I started out with a set of teams that looked like this:

```clojure
(def teams #{ "Man Utd" "Man City" "Arsenal" "Chelsea"})
```

and I wanted to transform that into a map from the team to its ranking, e.g.

Man Utd -> {:points 1200}
Man City -> {:points 1200}
Arsenal -> {:points 1200}
Chelsea -> {:points 1200}

I had read the documentation of array-map, a function which can be used to transform a collection of pairs into a map, and it seemed like it might do the trick. I started out by building a sequence of pairs using mapcat:

```clojure
> (mapcat (fn [x] [x {:points 1200}]) teams)
("Chelsea" {:points 1200} "Man City" {:points 1200} "Arsenal" {:points 1200} "Man Utd" {:points 1200})
```

array-map constructs a map from pairs of values, e.g.

```clojure
> (array-map "Chelsea" {:points 1200} "Man City" {:points 1200} "Arsenal" {:points 1200} "Man Utd" {:points 1200})
{"Chelsea" {:points 1200}, "Man City" {:points 1200}, "Arsenal" {:points 1200}, "Man Utd" {:points 1200}}
```

Since we have a collection of pairs rather than individual pairs, we need to use the apply function as well:

```clojure
> (apply array-map ["Chelsea" {:points 1200} "Man City" {:points 1200} "Arsenal" {:points 1200} "Man Utd" {:points 1200}])
{"Chelsea" {:points 1200}, "Man City" {:points 1200}, "Arsenal" {:points 1200}, "Man Utd" {:points 1200}}
```

And if we put it all together we end up with the following:

```clojure
> (apply array-map (mapcat (fn [x] [x {:points 1200}]) teams))
{"Man Utd" {:points 1200}, "Man City" {:points 1200}, "Arsenal" {:points 1200}, "Chelsea" {:points 1200}}
```

It works, but the function we pass to mapcat feels a bit clunky. Since we just need to create a collection of team/ranking pairs, we can use the vector and repeat functions to build that up instead:

```clojure
> (mapcat vector teams (repeat {:points 1200}))
("Chelsea" {:points 1200} "Man City" {:points 1200} "Arsenal" {:points 1200} "Man Utd" {:points 1200})
```

And if we put the apply array-map code back in we still get the desired result:

```clojure
> (apply array-map (mapcat vector teams (repeat {:points 1200})))
{"Chelsea" {:points 1200}, "Man City" {:points 1200}, "Arsenal" {:points 1200}, "Man Utd" {:points 1200}}
```

Alternatively we could use assoc like this:

```clojure
> (apply assoc {} (mapcat vector teams (repeat {:points 1200})))
{"Man Utd" {:points 1200}, "Arsenal" {:points 1200}, "Man City" {:points 1200}, "Chelsea" {:points 1200}}
```

I also came across the into function, which seemed useful but takes in a collection of vectors:

```clojure
> (into {} [["Chelsea" {:points 1200}] ["Man City" {:points 1200}] ["Arsenal" {:points 1200}] ["Man Utd" {:points 1200}]])
{"Chelsea" {:points 1200}, "Man City" {:points 1200}, "Arsenal" {:points 1200}, "Man Utd" {:points 1200}}
```

We therefore need to change the code to use map instead of mapcat:

```clojure
> (into {} (map vector teams (repeat {:points 1200})))
{"Chelsea" {:points 1200}, "Man City" {:points 1200}, "Arsenal" {:points 1200}, "Man Utd" {:points 1200}}
```

However, my favourite version so far uses the zipmap function like so:

```clojure
> (zipmap teams (repeat {:points 1200}))
{"Man Utd" {:points 1200}, "Arsenal" {:points 1200}, "Man City" {:points 1200}, "Chelsea" {:points 1200}}
```

I'm sure there are other ways to do this as well, so if you know any let me know in the comments.
September 28, 2013
by Mark Needham
· 12,885 Views
TestNG @BeforeClass Annotation Example
A TestNG method annotated with the @BeforeClass annotation will be run before the first test method in the current class is invoked.
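A minimal example (my own sketch, reusing the CalculatorService and SimpleCalculator classes from the DataProvider article above):

```java
package com.skilledmonster.example;

import org.testng.Assert;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class TestNGBeforeClassExample {

    private CalculatorService service;

    @BeforeClass
    public void init() {
        // Runs once, before the first @Test method in this class is invoked.
        service = new SimpleCalculator();
    }

    @Test
    public void testSum() {
        Assert.assertEquals(service.sum(2, 3), 5);
    }
}
```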
September 28, 2013
by Jagadeesh Motamarri
· 45,297 Views · 3 Likes
Creating Custom JavaFX Components with Scene Builder and FXML
One of the goals of our development with our JavaFX application is to keep as much of the UI design as possible within the Scene Builder UI design tool, and the business logic for the application in Java. One thing that is not entirely clear is how to create a new component in Scene Builder and then reuse that component within other components created in Scene Builder. In the example below, I illustrate how to create a custom table and an 'Add Plan' component, and then add them to a new scene using FXML.

The first step is to create a simple component which acts as an "Add Plan" widget on the UI. The widget contains an AnchorPane, ImageView and Label. For the next step, open up the FXML file and change the first "AnchorPane" declaration to an "fx:root" element whose type is AnchorPane. Then create a class which will act as both the root of the component and its controller. The class must extend the type that was previously defined in the FXML file. Use the FXMLLoader to set the root and the controller of the component to "this" class, and then load the component's FXML file.

```java
package com.lynden.fx.test;

import java.io.IOException;
import javafx.fxml.FXML;
import javafx.fxml.FXMLLoader;
import javafx.scene.layout.AnchorPane;

/**
 * @author robt
 */
public class TestButton extends AnchorPane {

    @FXML
    private AnchorPane myTestButton;

    public TestButton() {
        FXMLLoader fxmlLoader = new FXMLLoader(
                getClass().getResource("/com/lynden/planning/ui/TestButton.fxml"));
        fxmlLoader.setRoot(this);
        fxmlLoader.setController(this);
        try {
            fxmlLoader.load();
        } catch (IOException exception) {
            throw new RuntimeException(exception);
        }
    }
}
```

Next, the second component is a TableView which contains a collection of beans and displays their corresponding data. As with the last component, the first declaration in its FXML file is changed to an fx:root element, and a corresponding root and controller class is created for the table component.

```java
package com.lynden.fx.test;

import java.io.IOException;
import javafx.fxml.FXML;
import javafx.fxml.FXMLLoader;
import javafx.scene.control.TableView;
import javafx.scene.layout.AnchorPane;

/**
 * @author robt
 */
public class TestTable extends AnchorPane {

    @FXML
    private TableView myTableView;

    public TestTable() {
        FXMLLoader fxmlLoader = new FXMLLoader(
                getClass().getResource("/com/lynden/planning/ui/TestTable.fxml"));
        fxmlLoader.setRoot(this);
        fxmlLoader.setController(this);
        try {
            fxmlLoader.load();
        } catch (IOException exception) {
            throw new RuntimeException(exception);
        }
    }
}
```

Finally, a new scene can be created which includes both the TestTable and TestButton components constructed above. Unfortunately, custom components can't be added to the Scene Builder palette, so they must be inserted manually into the FXML file. You just need to ensure that you have the proper import statements defined; then the TestTable and TestButton components can be inserted into the FXML code just like any native JavaFX component.
When Scene Builder opens the FXML file, you will see that both components are displayed on the new scene. That's it! Hopefully this will help others who have been looking at adding custom components to UIs that are designed with Scene Builder. Twitter: @robterp
September 25, 2013
by Rob Terpilowski
· 60,803 Views · 1 Like
article thumbnail
Connecting to SQL Azure with SQL Management Studio
Intro

If you want to manage your SQL Databases in Azure using tools that you're a little more familiar and comfortable with - for example, SQL Management Studio - how do you go about connecting? You could read the help article from Microsoft, or you can follow my intuitive screen-based instructions below.

Assumptions

1. I'm assuming you have a version of SQL Management Studio already installed. I believe you'll need at least SQL Server 2008 R2's version or newer.
2. I'm further assuming you've already created a SQL Database in Azure.

Steps to Connect SSMS to SQL Azure

1. Authenticate to the Azure Portal.
2. Click on SQL Databases.
3. Click on Servers.
4. Click on the name of the server you wish to connect to.
5. Click on Configure. If not already in place, click on 'Add to the allowed IP addresses' to add your current IP address (or specify an address you wish to connect from) and click 'Save'.
6. Open SQL Management Studio and connect to Database services (usually comes up by default):
   - Enter the fully qualified server name (<servername>.database.windows.net)
   - Change to SQL Server Authentication
   - Enter the preferred login (if a new database, the username you specified when you created the DB server)
   - Enter the correct password
7. Hit the Connect button.

Troubleshooting

- Ensure you have the appropriate ports open outbound from your local network or connection (typically port 1433).
- Ensure you have allowed the correct public IP address you're trying to connect from via the Azure Portal (steps 1-5 above).
- Ensure you are using the correct server name and user name. For SSMS, this is the server name (in step 4) followed by .database.windows.net.
- Ensure you are using SQL Server Authentication. For SSMS, the username format is username@servername.
- If you forgot the password of your username, you can reset the password in the Azure Portal: in step 4, click on Dashboard.

Lastly…

You can click on the database (in step 2) to see your connection options.
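As a quick sanity check outside of SSMS, you can exercise the same connection rules (port 1433, SQL Server Authentication, the username@servername login format) from code. Here is a minimal sketch using the Microsoft JDBC driver for SQL Server on the classpath; the server, database and credential values are placeholders you would substitute with your own:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlAzureConnectionTest {

    public static void main(String[] args) throws Exception {
        // Placeholder values - substitute your own server, database and credentials.
        String url = "jdbc:sqlserver://myserver.database.windows.net:1433;"
                + "database=mydatabase;"
                + "user=myuser@myserver;"   // SQL Azure expects user@servername
                + "password=myPassword;"
                + "encrypt=true;"           // Azure requires encrypted connections
                + "loginTimeout=30;";

        try (Connection connection = DriverManager.getConnection(url);
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT @@VERSION")) {
            if (resultSet.next()) {
                System.out.println("Connected: " + resultSet.getString(1));
            }
        }
    }
}

If this connects but SSMS does not, the problem is almost always the firewall rule or the login format described in the troubleshooting list above.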
September 25, 2013
by Rob Sanders
· 262,687 Views
article thumbnail
Injecting Spring beans into non-managed objects
Advantages coming from dependency injection can be addictive. It's a lot easier to configure application structure using injection than doing all resolutions manually, and it's hard to give that up when we have some non-managed classes that are instantiated outside of the container - for example, classes that are part of other frameworks, like Vaadin UI components or JPA entities. The latter are especially important when we're using Domain-Driven Design. During DDD training run by Slawek Sobotka, we were talking about options to remove "bad coupling" from aggregate factories. I'm sure you'll admit that it's better to have a generic mechanism able to "energize" objects with dependencies defined by e.g. an @Inject annotation than to inject all necessary dependencies into a particular factory and then pass them into each object by constructor, builder or simple setters. The Spring framework brings us two different solutions to achieve this. I'll now describe both of them.

Let's start with the simpler one. It's especially useful when we have a case such as a generic repository, the aggregate factory mentioned earlier, or any other factory - anything ensuring that we have only a few places in our code where we instantiate objects outside of the container. In this case we can make use of the AutowireCapableBeanFactory class. In particular, we will be interested in two methods:

void autowireBean(Object existingBean)
Object initializeBean(Object existingBean, String beanName)

The first one just populates our bean without applying specific post-processors (e.g. @PostConstruct, etc.). The second one additionally applies factory callbacks like setBeanName and setBeanFactory, as well as any other post-processors - including @PostConstruct, of course. In our code it'll look like this:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.AutowireCapableBeanFactory;

public abstract class GenericFactory<T> {

    @Autowired
    private AutowireCapableBeanFactory autowireBeanFactory;

    public T createBean() {
        // creation logic (elided) produces createdBean
        autowireBeanFactory.autowireBean(createdBean);
        return createdBean;
    }
}

Simple and powerful - my favorite combination :)

But what can we do when there are plenty of places in our code where objects get born? That is, for example, the case when building Vaadin layouts in a Spring web application. Injecting custom bean-configurer objects that invoke the autowireBean method won't be the peak of productivity. Happily, the Spring developers brought us the @Configurable annotation. This annotation, combined with aspects, will configure each annotated object even if we create it outside of the container using the new operator. As with any other aspects, we can choose between load-time weaving (LTW) and compile-time weaving (CTW). The first one is easier to configure, but it makes us dependent (due to instrumentation) on the application server, which can be undesirable in some cases. To use it, we need to annotate our @Configuration class with @EnableLoadTimeWeaving (or add the <context:load-time-weaver/> tag if you like Flintstones and XML configuration ;)). After completing the configuration, just annotate the class with @Configurable:

import com.vaadin.ui.Button;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

@Configurable
public class MyCustomButton extends Button {

    @Autowired
    private MyAwesomeService myAwesomeService;

    // handlers making use of the injected service
}

The second option is a little more complex to set up, but afterwards it will be a lot lighter at runtime. Because now we want to weave aspects at compile time, we have to integrate the AspectJ compiler into our build.
In Maven you need to add a few dependencies:

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>1.7.3</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aspects</artifactId>
    <version>3.2.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-tx</artifactId>
    <version>3.2.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>javax.persistence</groupId>
    <artifactId>persistence-api</artifactId>
    <version>1.0</version>
    <scope>provided</scope>
</dependency>

I hope you are curious about why persistence-api appears above. That was also strange to me when I saw the "can't determine annotations of missing type javax.persistence.Entity" error during AspectJ compilation. The answer can be found in the Spring Framework JIRA, in issue SPR-6819: this happens when you configure spring-aspects as an aspectLibrary in the aspectj-maven-plugin. The issue has been unresolved for over three years, so better get used to it :)

The last thing we need to do is to include the above-mentioned plugin in our plugins section:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>aspectj-maven-plugin</artifactId>
    <version>1.5</version>
    <configuration>
        <complianceLevel>1.7</complianceLevel>
        <source>1.7</source>
        <target>1.7</target>
        <showWeaveInfo>true</showWeaveInfo>
        <aspectLibraries>
            <aspectLibrary>
                <groupId>org.springframework</groupId>
                <artifactId>spring-aspects</artifactId>
            </aspectLibrary>
        </aspectLibraries>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>

And that's all folks :)
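One closing sketch for the first approach described above, to make the autowireBean/initializeBean distinction concrete. The factory and bean names here are made up for illustration, and MyAwesomeService stands in for any Spring-managed service:

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.AutowireCapableBeanFactory;

public class ReportBuilderFactory {

    @Autowired
    private AutowireCapableBeanFactory autowireBeanFactory;

    public ReportBuilder createAutowiredOnly() {
        ReportBuilder builder = new ReportBuilder();
        // Populates @Autowired fields only; lifecycle callbacks are skipped.
        autowireBeanFactory.autowireBean(builder);
        return builder;
    }

    public ReportBuilder createFullyInitialized() {
        ReportBuilder builder = new ReportBuilder();
        autowireBeanFactory.autowireBean(builder);
        // Additionally applies BeanPostProcessors, so @PostConstruct runs here.
        return (ReportBuilder) autowireBeanFactory.initializeBean(builder, "reportBuilder");
    }
}

class ReportBuilder {

    @Autowired
    private MyAwesomeService myAwesomeService;  // injected by autowireBean()

    @PostConstruct
    void init() {
        // Invoked only when the bean goes through initializeBean().
    }
}

Note that initializeBean does not inject dependencies by itself, which is why the sketch calls autowireBean first and then runs the bean through the initialization callbacks.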
September 24, 2013
by Jakub Kubrynski
· 23,188 Views · 1 Like