
John Ferguson Smart

Principal Consultant at Wakaleo Consulting

London, GB

Joined Jun 2005

http://johnfergusonsmart.com

About

John Ferguson Smart is an experienced author, speaker and trainer specialising in Agile delivery practices, currently based in London. An international speaker well known in the Agile community for his many published articles and presentations, particularly in areas such as BDD, TDD, test automation, software craftsmanship and team collaboration, John helps organisations and teams around the world deliver better software sooner, both through more effective collaboration and communication techniques and through better technical practices. He is also the author of 'BDD in Action', 'Jenkins: The Definitive Guide' and 'Java Power Tools', and lead developer of the Serenity BDD test automation library.

Stats

Reputation: 69
Pageviews: 733.5K
Articles: 9
Comments: 59

Articles

An Introduction to BDD Test Automation with Serenity and JUnit
Serenity BDD (previously known as Thucydides) is an open source reporting library that helps you write better structured, more maintainable automated acceptance criteria. It also produces rich, meaningful test reports ("living documentation") that report not only on the test results, but also on which features have been tested. And when your automated acceptance tests exercise a web interface, Serenity comes with a host of features that make writing automated web tests easier and faster.

1. BDD Fundamentals

Before we get into the nitty-gritty details, let's talk about Behaviour Driven Development, a core concept underlying many of Serenity's features. Behaviour Driven Development, or BDD, is an approach where teams use conversations around concrete examples to build up a shared understanding of the features they are supposed to build. For example, suppose you are building a site where artists and craftspeople can sell their goods online. One important feature for such a site would be the search feature. You might express this feature using the story-card format commonly used in Agile projects:

In order for buyers to find what they are looking for more efficiently
As a seller
I want buyers to be able to search for articles by keywords

To build up a shared understanding of this requirement, you could talk through a few concrete examples. The conversation might go something like this:

"So give me an example of how a search might work."
"Well, if I search for wool, then I should see only woolen products."
"Sounds simple enough. Are there any other variations on the search feature that would produce different outcomes?"
"Well, I could also filter the search results; for example, I could look for only handmade woolen products."

And so on. In practice, many of the examples that get discussed become "acceptance criteria" for the features, and many of these acceptance criteria become automated acceptance tests.
Automating acceptance tests provides valuable feedback to the whole team: unlike unit and integration tests, they are typically expressed in business terms and can be easily understood by non-developers. And, as we will see later in this article, the reports produced when these tests are executed give a clear picture of the state of the application.

2. Serenity BDD and JUnit

In this article, we will learn how to use Serenity BDD with nothing more than JUnit, Serenity BDD, and a little Selenium WebDriver. Automated acceptance tests can use more specialized BDD tools such as Cucumber or JBehave, but many teams like to keep it simple and use more conventional unit testing tools like JUnit. This is fine: the essence of the BDD approach lies in the conversations the team has to discuss the requirements and discover the acceptance criteria.

2.1. Writing the Acceptance Test

Let's start off with a simple example. The first example that was discussed was searching for wool. The corresponding automated acceptance test for this example in JUnit looks like this:

```java
@RunWith(SerenityRunner.class)
public class WhenSearchingByKeyword {

    @Managed(driver = "chrome", uniqueSession = true)
    WebDriver driver;

    @Steps
    BuyerSteps buyer;

    @Test
    public void should_see_a_list_of_items_related_to_the_specified_keyword() {
        // GIVEN
        buyer.opens_etsy_home_page();
        // WHEN
        buyer.searches_for_items_containing("wool");
        // THEN
        buyer.should_see_items_related_to("wool");
    }
}
```

There are several things to point out here:
  • The Serenity test runner sets up the test and records the test results.
  • This is a web test, and Serenity will manage the WebDriver driver for us.
  • We hide implementation details about how the test is executed in a "step library".
  • The test itself is reduced to the bare essential business logic we want to demonstrate.

When you use Serenity with JUnit, you need to use the SerenityRunner test runner.
This instruments the JUnit class and instantiates the WebDriver driver (if it is a web test), as well as any step libraries and page objects that you use in your test (more on these later).

The @Managed annotation tells Serenity that this is a web test. Serenity takes care of instantiating the WebDriver instance, opening the browser, and shutting it down at the end of the test. You can also use this annotation to specify which browser to use, or whether to keep the browser open during all of the tests in the test case.

The @Steps annotation tells Serenity that this variable is a step library. In Serenity, we use step libraries to add a layer of abstraction between the "what" and the "how" of our acceptance tests. At the top level, the step methods document "what" the acceptance test is doing, in fairly implementation-neutral, business-friendly terms. So we say "searches for items containing wool", not "enters wool into the search field and clicks on the search button". This layered approach makes the tests easier both to understand and to maintain, and helps build up a great library of reusable business-level steps that we can use in other tests.

2.2. The Step Library
The step library class is just an ordinary Java class, with methods annotated with the @Step annotation:

```java
public class BuyerSteps {

    HomePage homePage;
    SearchResultsPage searchResultsPage;

    @Step
    public void opens_etsy_home_page() {
        homePage.open();
    }

    @Step
    public void searches_for_items_containing(String keywords) {
        homePage.searchFor(keywords);
    }

    @Step
    public void should_see_items_related_to(String keywords) {
        List<String> resultTitles = searchResultsPage.getResultTitles();
        resultTitles.forEach(title -> assertThat(title).contains(keywords));
    }
}
```

A few points to note:
  • Step libraries often use page objects, which are automatically instantiated.
  • The @Step annotation indicates a method that will appear as a step in the test reports.

For automated web tests, the step library methods do not call WebDriver directly; rather, they typically interact with page objects.

2.3. The Page Objects

Page objects encapsulate how a test interacts with a particular web page. They hide the WebDriver implementation details about how elements on a page are accessed and manipulated behind more business-friendly methods. Like steps, page objects are reusable components that make the tests easier to understand and to maintain. Serenity automatically instantiates page objects for you, and injects the current WebDriver instance. All you need to worry about is the WebDriver code that interacts with the page, and Serenity provides a few shortcuts to make this easier as well.
For example, here is the page object for the home page:

```java
@DefaultUrl("http://www.etsy.com")
public class HomePage extends PageObject {

    @FindBy(css = "button[value='search']")
    WebElement searchButton;

    public void searchFor(String keywords) {
        $("#search-query").sendKeys(keywords);
        searchButton.click();
    }
}
```

  • The @DefaultUrl annotation defines what URL should be used by default when we call the open() method.
  • A Serenity page object must extend the PageObject class.
  • You can use the $ method to access elements directly using CSS or XPath expressions, or you may use a member variable annotated with the @FindBy annotation.

And here is the second page object we use:

```java
public class SearchResultsPage extends PageObject {

    @FindBy(css = ".listing-card")
    List<WebElement> listingCards;

    public List<String> getResultTitles() {
        return listingCards.stream()
                           .map(WebElement::getText)
                           .collect(Collectors.toList());
    }
}
```

In both cases, we are hiding the WebDriver implementation of how we access the page elements inside the page object methods. This makes the code easier to read and reduces the number of places you need to change if a page is modified. This approach encourages a very high degree of reuse. For example, the second example mentioned at the start of this article involved filtering results by type.
The corresponding automated acceptance criteria might look like this:

```java
@Test
public void should_be_able_to_filter_by_item_type() {
    // GIVEN
    buyer.opens_etsy_home_page();
    // WHEN
    buyer.searches_for_items_containing("wool");
    int unfilteredItemCount = buyer.get_matching_item_count();
    // AND
    buyer.filters_results_by_type("handmade");
    // THEN
    buyer.should_see_items_related_to("wool");
    // AND
    buyer.should_see_item_count(lessThan(unfilteredItemCount));
}

@Test
public void should_be_able_to_view_details_about_a_searched_item() {
    // GIVEN
    buyer.opens_etsy_home_page();
    // WHEN
    buyer.searches_for_items_containing("wool");
    buyer.selects_item_number(5);
    // THEN
    buyer.should_see_matching_details();
}
```

Notice how most of the methods here are reused from the previous steps: in fact, only two new methods are required.

3. Reporting and Living Documentation

Reporting is one of Serenity's fortes. Serenity not only reports on whether a test passes or fails; it documents what the test did, in a step-by-step narrative format that includes test data and screenshots for web tests. For example, the following page illustrates the test results for our first acceptance criteria:

Figure 1. Test results reported in Serenity

But test outcomes are only part of the picture. It is also important to know what work has been done, and what is work in progress. Serenity provides the @Pending annotation, which lets you indicate that a scenario is not yet completed but has been scheduled for work, as illustrated here:

```java
@RunWith(SerenityRunner.class)
public class WhenPuttingItemsInTheShoppingCart {

    @Pending
    @Test
    public void shouldUpdateShippingPriceForDifferentDestinationCountries() {}
}
```

This test will appear in the reports as pending (blue in the graphs):

Figure 2. Test result overview

We can also organize our acceptance tests in terms of the features or requirements they are testing.
One simple approach is to organize your requirements in suitably-named packages:

```
|----net
|    |----serenity_bdd
|    |    |----samples
|    |    |    |----etsy
|    |    |    |    |----features
|    |    |    |    |    |----search
|    |    |    |    |    |    |----WhenSearchingByKeyword.java
|    |    |    |    |    |    |----WhenViewingItemDetails.java
|    |    |    |    |    |----shopping_cart
|    |    |    |    |    |    |----WhenPuttingItemsInTheShoppingCart.java
|    |    |    |    |----pages
|    |    |    |    |    |----HomePage.java
|    |    |    |    |    |----ItemDetailsPage.java
|    |    |    |    |    |----RegisterPage.java
|    |    |    |    |    |----SearchResultsPage.java
|    |    |    |    |    |----ShoppingCartPage.java
|    |    |    |    |----steps
|    |    |    |    |    |----BuyerSteps.java
```

All the test cases are organized under the features directory: test cases related to the search feature, and test cases related to the shopping cart feature. Serenity can use this package structure to group and aggregate the test results for each feature. You need to tell Serenity the root package you are using, and what terms you use for your requirements. You do this in a special file called (for historical reasons) thucydides.properties, which lives in the root directory of your project:

```
thucydides.test.root=net.serenity_bdd.samples.etsy.features
thucydides.requirement.types=feature,story
```

With this configured, Serenity will report on how well each requirement has been tested, and will also tell you about the requirements that have not been tested:

Figure 3. Serenity reports on requirements as well as tests

4. Conclusion

Hopefully this is enough to get you started with Serenity. That said, we have barely scratched the surface of what Serenity can do for your automated acceptance tests. You can read more about Serenity, and the principles behind it, in the users manual, or in BDD in Action, which devotes several chapters to these practices. And be sure to check out the online courses at Parleys. You can get the source code for the project discussed in this article on GitHub.
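To try these examples in your own project, you will also need the Serenity core library and its JUnit integration on the test classpath. A pom.xml fragment along these lines is typical; the net.serenity-bdd coordinates are the current ones (the library has moved since this article's Thucydides days), and the version placeholder should be replaced with the latest release from Maven Central:

```
<dependency>
    <groupId>net.serenity-bdd</groupId>
    <artifactId>serenity-core</artifactId>
    <version>LATEST_VERSION</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>net.serenity-bdd</groupId>
    <artifactId>serenity-junit</artifactId>
    <version>LATEST_VERSION</version>
    <scope>test</scope>
</dependency>
```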
December 12, 2014
· 59,290 Views · 6 Likes
Data-driven Unit Testing in Java
Data-driven testing is a powerful way of testing a given scenario with different combinations of values. In this article, we look at several ways to do data-driven unit testing in JUnit. Suppose, for example, you are implementing a Frequent Flyer application that awards status levels (Bronze, Silver, Gold, Platinum) based on the number of status points you earn. The number of points needed for each level is shown here:

    Level    Minimum status points    Resulting level
    Bronze   0                        Bronze
    Bronze   300                      Silver
    Bronze   700                      Gold
    Bronze   1500                     Platinum

Our unit tests need to check that we can correctly calculate the status level achieved when a frequent flyer earns a certain number of points. This is a classic problem where data-driven tests provide an elegant, efficient solution. Data-driven testing is well supported in modern JVM unit testing libraries such as Spock and specs2. However, some teams don't have the option of using a language other than Java, or are limited to using JUnit. In this article, we look at a few options for data-driven testing in plain old JUnit.

Parameterized Tests in JUnit

JUnit provides some support for data-driven tests via the Parameterized test runner.
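Before looking at the test runners, it helps to have a concrete implementation of the status rules in mind. The article never shows the FrequentFlyer class itself, so the following is a hypothetical, minimal sketch of the threshold logic from the table above, with the test rows checked by a plain loop:

```java
public class StatusLevels {

    enum Status { Bronze, Silver, Gold, Platinum }

    // Thresholds from the article's table: 300 -> Silver, 700 -> Gold, 1500 -> Platinum.
    // This standalone method is illustrative; the article's real FrequentFlyer API is not shown.
    static Status statusFor(int totalPoints) {
        if (totalPoints >= 1500) return Status.Platinum;
        if (totalPoints >= 700)  return Status.Gold;
        if (totalPoints >= 300)  return Status.Silver;
        return Status.Bronze;
    }

    public static void main(String[] args) {
        // Rows mirror the article's test data: {initial points, earned points} -> expected level
        int[][] rows = {{0, 100}, {0, 300}, {100, 200}, {0, 700}, {0, 1500}};
        Status[] expected = {Status.Bronze, Status.Silver, Status.Silver,
                             Status.Gold, Status.Platinum};
        for (int i = 0; i < rows.length; i++) {
            Status actual = statusFor(rows[i][0] + rows[i][1]);
            if (actual != expected[i]) {
                throw new AssertionError("row " + i + ": got " + actual);
            }
        }
        System.out.println("all rows pass");
    }
}
```

The data-driven frameworks below automate exactly this kind of row-by-row loop, while also reporting each row as a separate test.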
A simple data-driven test in JUnit using this approach might look like this:

```java
@RunWith(Parameterized.class)
public class WhenEarningStatus {

    @Parameters(name = "{index}: {0} initially had {1} points, earns {2} points, should become {3}")
    public static Iterable<Object[]> data() {
        return Arrays.asList(new Object[][]{
                {Bronze, 0,   100,  Bronze},
                {Bronze, 0,   300,  Silver},
                {Bronze, 100, 200,  Silver},
                {Bronze, 0,   700,  Gold},
                {Bronze, 0,   1500, Platinum},
        });
    }

    private Status initialStatus;
    private int initialPoints;
    private int earnedPoints;
    private Status finalStatus;

    public WhenEarningStatus(Status initialStatus, int initialPoints,
                             int earnedPoints, Status finalStatus) {
        this.initialStatus = initialStatus;
        this.initialPoints = initialPoints;
        this.earnedPoints = earnedPoints;
        this.finalStatus = finalStatus;
    }

    @Test
    public void shouldUpgradeStatusBasedOnPointsEarned() {
        FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                            .named("Joe", "Jones")
                                            .withStatusPoints(initialPoints)
                                            .withStatus(initialStatus);

        member.earns(earnedPoints).statusPoints();

        assertThat(member.getStatus()).isEqualTo(finalStatus);
    }
}
```

You provide the test data in the form of a list of Object arrays, identified by the @Parameters annotation. These object arrays contain the rows of test data that you use for your data-driven test. Each row is used to instantiate the member variables of the class, via the constructor. When you run the test, JUnit will instantiate and run a test for each row of data. You can use the name attribute of the @Parameters annotation to provide a more meaningful title for each test.

There are a few limitations to JUnit parameterized tests. The most important is that, since the test data is defined at the class level and not at the test level, you can only have one set of test data per test class. The code is also somewhat cluttered: you need to define member variables, a constructor, and so forth. Fortunately, there is a better option.
Using JUnitParams

A more elegant way to do data-driven testing in JUnit is to use JUnitParams (https://code.google.com/p/junitparams/; see Maven Central for the latest version), an open source library that makes data-driven testing in JUnit easier and more explicit. A simple data-driven test using JUnitParams looks like this:

```java
@RunWith(JUnitParamsRunner.class)
public class WhenEarningStatusWithJUnitParams {

    @Test
    @Parameters({
            "Bronze, 0,   100,  Bronze",
            "Bronze, 0,   300,  Silver",
            "Bronze, 100, 200,  Silver",
            "Bronze, 0,   700,  Gold",
            "Bronze, 0,   1500, Platinum"
    })
    public void shouldUpgradeStatusBasedOnPointsEarned(Status initialStatus, int initialPoints,
                                                       int earnedPoints, Status finalStatus) {
        FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                            .named("Joe", "Jones")
                                            .withStatusPoints(initialPoints)
                                            .withStatus(initialStatus);

        member.earns(earnedPoints).statusPoints();

        assertThat(member.getStatus()).isEqualTo(finalStatus);
    }
}
```

Test data is defined in the @Parameters annotation, which is associated with the test itself rather than the class, and is passed to the test via method parameters. This makes it possible to have different sets of test data for different tests in the same class, or to mix data-driven tests with normal tests in the same class, which is a much more logical way of organizing your classes.
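The string rows such as "Bronze, 0, 100, Bronze" are essentially CSV lines, and parsing one by hand shows roughly what the runner does before invoking the test method (a framework-free sketch; the class and method names here are hypothetical):

```java
import java.util.Arrays;

public class CsvRows {

    // Split a JUnitParams-style row like "Bronze, 0, 100, Bronze" into trimmed fields,
    // the same shape a runner would pass to the test method's parameters
    // (type conversion to enums/ints would happen afterwards).
    static String[] parseRow(String row) {
        return Arrays.stream(row.split(","))
                     .map(String::trim)
                     .toArray(String[]::new);
    }

    public static void main(String[] args) {
        String[] fields = parseRow("Bronze, 0, 100, Bronze");
        System.out.println(Arrays.toString(fields)); // [Bronze, 0, 100, Bronze]
    }
}
```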
JUnitParams also lets you get test data from other methods, as illustrated here:

```java
@Test
@Parameters(method = "sampleData")
public void shouldUpgradeStatusFromEarnedPoints(Status initialStatus, int initialPoints,
                                                int earnedPoints, Status finalStatus) {
    FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                        .named("Joe", "Jones")
                                        .withStatusPoints(initialPoints)
                                        .withStatus(initialStatus);

    member.earns(earnedPoints).statusPoints();

    assertThat(member.getStatus()).isEqualTo(finalStatus);
}

private Object[] sampleData() {
    return $(
            $(Bronze, 0,   100, Bronze),
            $(Bronze, 0,   300, Silver),
            $(Bronze, 100, 200, Silver)
    );
}
```

The $ method provides a convenient shorthand for converting test data to the Object arrays that need to be returned. You can also externalize the test data into a separate class:

```java
@Test
@Parameters(source = StatusTestData.class)
public void shouldUpgradeStatusFromEarnedPoints(Status initialStatus, int initialPoints,
                                                int earnedPoints, Status finalStatus) {
    ...
}
```

The test data here comes from a method in the StatusTestData class:

```java
public class StatusTestData {
    public static Object[] provideEarnedPointsTable() {
        return $(
                $(Bronze, 0,   100, Bronze),
                $(Bronze, 0,   300, Silver),
                $(Bronze, 100, 200, Silver)
        );
    }
}
```

This method needs to be static, return an object array, and start with the word "provide". Getting test data from external methods or classes in this way opens the way to retrieving test data from external sources such as CSV or Excel files. JUnitParams provides a simple and clean way to implement data-driven tests in JUnit, without the overhead and limitations of the traditional JUnit parameterized tests.

Testing with Non-Java Languages

If you are not constrained to Java and/or JUnit, more modern tools such as Spock (https://code.google.com/p/spock/) and specs2 provide great ways of writing clean, expressive unit tests in Groovy and Scala respectively.
In Groovy, for example, you could write a test like the following:

```groovy
class WhenEarningStatus extends Specification {

    def "should earn status based on the number of points earned"() {
        given:
        def member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                  .named("Joe", "Jones")
                                  .withStatusPoints(initialPoints)
                                  .withStatus(initialStatus)

        when:
        member.earns(earnedPoints).statusPoints()

        then:
        member.status == finalStatus

        where:
        initialStatus | initialPoints | earnedPoints | finalStatus
        Bronze        | 0             | 100          | Bronze
        Bronze        | 0             | 300          | Silver
        Bronze        | 100           | 200          | Silver
        Silver        | 0             | 700          | Gold
        Gold          | 0             | 1500         | Platinum
    }
}
```

John Ferguson Smart is a specialist in BDD, automated testing, and software life cycle development optimization, and author of BDD in Action and other books. John runs regular courses in Australia, London and Europe on related topics such as Agile Requirements Gathering, Behaviour Driven Development, Test Driven Development, and Automated Acceptance Testing.
July 27, 2014
· 23,971 Views · 1 Like
Using BDD with Web Services: A Tutorial Using JBehave and Thucydides
Behavior Driven Development (BDD) is an approach that uses conversations around concrete examples to discover, describe and formalize the behavior of a system. BDD tools such as JBehave and Cucumber are often used for web-based automated acceptance testing, but BDD is also an excellent approach to adopt if you need to design a web service. In this article, we will see how you can use JBehave and Thucydides to express and automate clear, meaningful acceptance criteria for a RESTful web service. (The general approach would also work for a web service using SOAP.) We will also see how the reports (or "living documentation", in BDD terms) generated by these automated acceptance criteria do a great job of documenting the web service.

Thucydides is an open source BDD reporting and acceptance testing library that is used in conjunction with other testing tools, either BDD testing frameworks such as JBehave, Cucumber or SpecFlow, or more traditional libraries such as JUnit.

Web services are easy to model and test using BDD techniques, in many ways more so than web applications. Web services are (or should be) relatively easy to describe in behavioral terms: they accept a well-defined set of input parameters, and return a well-defined result. So they fit well into the typical BDD-style way of describing behavior, using the given-when-then format:

Given some precondition
When something happens
Then a particular outcome is expected

During the rest of this article we will see how to describe and automate web service behavior in this way. To follow along, you will need Java and Maven installed on your machine (I used Java 8 and Maven 3.2.1). The source code is also available on GitHub (https://github.com/thucydides-webtests/webservice-demo).
If you want to build the project from scratch, first create a new Thucydides/JBehave project from the command line like this:

```
mvn archetype:generate -Dfilter=thucydides-jbehave
```

Enter whatever artifact and group names you like; it doesn't make any difference for this example:

```
Choose archetype:
1: local -> net.thucydides:thucydides-jbehave-archetype
   (Thucydides automated acceptance testing project using Selenium 2, JUnit and JBehave)
Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): : 1
Define value for property 'groupId': : com.wakaleo.training.webservices
Define value for property 'artifactId': : bdddemo
Define value for property 'version': 1.0-SNAPSHOT: : 1.0.0-SNAPSHOT
Define value for property 'package': com.wakaleo.training.webservices: :
Confirm properties configuration:
groupId: com.wakaleo.training.webservices
artifactId: bdddemo
version: 1.0.0-SNAPSHOT
package: com.wakaleo.training.webservices
Y: :
```

This will create a simple project set up with JBehave and Thucydides. It is designed to test web applications, but it is easy enough to adapt to work with a RESTful web service. We don't need the demo code, so you can safely delete all of the Java classes (except for the AcceptanceTestSuite class) and the JBehave .story files. Now, update the pom.xml file to use the latest versions of Thucydides (at the time of writing, 0.9.239 and 0.9.235 for the Thucydides core and JBehave integration respectively).

Once you have done this, you need to define some stories and scenarios for your web service. To keep things simple in this example, we will be working with two simple requirements: shortening and expanding URLs using Google's URL shortening service. We will describe these in two JBehave story files. Create a stories directory under src/test/resources, and create a sub-directory for each requirement, called expanding_urls and shortening_urls. Each directory represents a high-level capability that we want to implement.
Inside these directories we place the JBehave story files (expanding_urls.story and shortening_urls.story) for the features we need. (This structure is a little overkill in this case, but is useful for real-world projects where the requirements are more numerous and more complex.)

The story files contain the BDD-style given-when-then scenarios that describe how the web service should behave. When you design a web service using BDD, you can express behavior at two levels (and many projects use both). The first approach is to describe the JSON data in the BDD scenarios, as illustrated here:

```
Scenario: Shorten URLs

Given a url http://www.google.com
When I request the shortened form of this url
Then I should obtain the following JSON message:
{
  "kind": "urlshortener#url",
  "id": "http://goo.gl/fbss",
  "longUrl": "http://www.google.com/"
}
```

This works well if your scenarios have a very technical audience (i.e. if you are writing a web service purely for other developers), and if the JSON contents remain simple. It is also a good way to agree on the JSON format that the web service will produce. But if you need to discuss the scenario with the business, BAs or even testers, and/or if the JSON you are returning is more complicated, putting JSON in the scenarios is not such a good idea. This approach also works poorly for SOAP-based web services, where the XML message structure is more complex.
A better approach in these situations is to describe the inputs and expected outcomes in business terms, and then to translate these into the appropriate JSON format within the step definitions:

```
Scenario: Shorten URLs

Given a url <providedUrl>
When I request the shortened form of this url
Then the shortened form should be <expectedUrl>

Examples:
| providedUrl            | expectedUrl        |
| http://www.google.com/ | http://goo.gl/fbss |
| http://www.amazon.com/ | http://goo.gl/xj57 |
```

Let's see how we would automate this scenario using JBehave and Thucydides. First, we need to write JBehave step definitions in Java for each of the given/when/then steps in the scenarios we just saw. Create a class called ProcessingUrls next to the AcceptanceTestSuite class, or in a subdirectory underneath it.

The step definitions for this scenario are simple, and largely delegate to a class called UrlShortenerSteps to do the heavyweight work. This approach makes a cleaner separation of the "what" from the "how", and makes reuse easier; for example, if we need to change the underlying web service used to implement the URL shortening feature, these step definitions should remain unchanged:

```java
public class ProcessingUrls {

    String providedUrl;
    String returnedMessage;

    @Steps
    UrlShortenerSteps urlShortener;

    @Given("a url <providedUrl>")
    public void givenAUrl(String providedUrl) {
        this.providedUrl = providedUrl;
    }

    @When("I request the shortened form of this url")
    public void shortenUrl() {
        returnedMessage = urlShortener.shorten(providedUrl);
    }

    @When("I request the expanded form of this url")
    public void expandUrl() {
        returnedMessage = urlShortener.expand(providedUrl);
    }

    @Then("the shortened form should be <expectedUrl>")
    public void shortenedFormShouldBe(String expectedUrl) throws JSONException {
        urlShortener.response_should_contain_shortened_url(returnedMessage, expectedUrl);
    }
}
```

Now add the UrlShortenerSteps class.
This class contains the actual test code that interacts with your web service. We could use any Java REST client for this, but here we are using the Spring RestTemplate. The full class looks like this:

```java
public class UrlShortenerSteps extends ScenarioSteps {

    RestTemplate restTemplate;

    public UrlShortenerSteps() {
        restTemplate = new RestTemplate();
    }

    @Step("longUrl={0}")
    public String shorten(String providedUrl) {
        Map<String, String> urlForm = new HashMap<>();
        urlForm.put("longUrl", providedUrl);
        return restTemplate.postForObject("https://www.googleapis.com/urlshortener/v1/url",
                                          urlForm, String.class);
    }

    @Step("shortUrl={0}")
    public String expand(String providedUrl) {
        return restTemplate.getForObject(
                "https://www.googleapis.com/urlshortener/v1/url?shortUrl={shortUrl}",
                String.class, providedUrl);
    }

    @Step
    public void response_should_contain_shortened_url(String returnedMessage, String expectedUrl)
            throws JSONException {
        String expectedJsonMessage = "{'id':'" + expectedUrl + "'}";
        JSONAssert.assertEquals(expectedJsonMessage, returnedMessage, JSONCompareMode.LENIENT);
    }

    @Step
    public void response_should_contain_long_url(String returnedMessage, String expectedUrl)
            throws JSONException {
        String expectedJsonMessage = "{'longUrl':'" + expectedUrl + "'}";
        JSONAssert.assertEquals(expectedJsonMessage, returnedMessage, JSONCompareMode.LENIENT);
    }
}
```

The Spring RestTemplate class is an easy way to interact with a web service with a minimum of fuss.
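If you would rather avoid the Spring dependency, the JDK's built-in HttpClient (Java 11+) can make the same calls. The following is a hedged sketch, not the article's actual code: it only constructs the POST request that a shorten() step would send (actually sending it would use HttpClient.newHttpClient().send(...)), and the helper name is hypothetical:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ShortenRequest {

    // Build the POST request for the URL shortener endpoint used in the article.
    // The JSON body mirrors the {"longUrl": ...} form sent via RestTemplate.
    static HttpRequest shortenRequest(String longUrl) {
        String json = "{\"longUrl\":\"" + longUrl + "\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create("https://www.googleapis.com/urlshortener/v1/url"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = shortenRequest("http://www.google.com/");
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Keeping request construction separate from sending, as here, also makes this part of the step library unit-testable without hitting the network.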
In the shorten() method, we invoke the URL shortener web service using a POST operation to shorten a URL:

```java
@Step("longUrl={0}")
public String shorten(String providedUrl) {
    Map<String, String> urlForm = new HashMap<>();
    urlForm.put("longUrl", providedUrl);
    return restTemplate.postForObject("https://www.googleapis.com/urlshortener/v1/url",
                                      urlForm, String.class);
}
```

The expand service is even simpler to call, as it just uses a simple GET operation:

```java
@Step("shortUrl={0}")
public String expand(String providedUrl) {
    return restTemplate.getForObject(
            "https://www.googleapis.com/urlshortener/v1/url?shortUrl={shortUrl}",
            String.class, providedUrl);
}
```

In both cases, we return the JSON document produced by the web service, and verify its contents in the "then" step using the JSONAssert library. There are many libraries you can use to verify the JSON data returned from a web service. If you need to check the entire JSON structure, JSONAssert provides a convenient API to do so. JSONAssert lets you match JSON documents strictly (all the elements must match, in the right order) or leniently (you only specify a subset of the fields that need to appear in the JSON document, regardless of order). The following step checks that the JSON document contains an id field with the expected URL value. The full JSON document will appear in the reports because it is passed as a parameter to this step:

```java
@Step
public void response_should_contain_shortened_url(String returnedMessage, String expectedUrl)
        throws JSONException {
    String expectedJsonMessage = "{'id':'" + expectedUrl + "'}";
    JSONAssert.assertEquals(expectedJsonMessage, returnedMessage, JSONCompareMode.LENIENT);
}
```

You can run these scenarios using mvn verify from the command line. This will produce the test reports and the Thucydides living documentation for these scenarios. Once you have run mvn verify, open the index.html file in the target/site/thucydides directory. This gives an overview of the test results.
If you click on the Requirements tab, you will see an overview of the results in terms of capabilities and features. We call this "feature coverage". Drill down into the "Shorten URLs" test result. Here you will see a summary of the story or feature illustrated by this scenario, and if you scroll down further, you will see the details of how this web service was tested, including the JSON document returned by the service.

BDD is a great fit for developing and testing web services. If you want to learn more about BDD, be sure to check out the BDD, TDD and test automation workshops we are running in Sydney and Melbourne this May!

John Ferguson Smart is a well-regarded consultant, coach, and trainer in technical agile practices based in Sydney, Australia. A prominent international figure in the domain of Behaviour Driven Development, automated testing and software life cycle development optimisation, John helps organisations around the world to improve their agile development practices and to optimise their Java development processes and infrastructures. He is the author of several books, most recently BDD in Action for Manning.
April 28, 2014
· 18,265 Views · 2 Likes
Functional Test Coverage - taking BDD reporting to the next level
From an original article on Wakaleo.com

Conventional test reports, generated by tools such as JUnit or TestNG, naturally focus on what tests have been executed, and whether they passed or failed. While this is certainly useful from a testing perspective, these reports are far from telling the whole picture.

BDD reporting tools like Cucumber and JBehave take things a step further, introducing the concept of "pending" tests. A pending test is one that has been specified (for example, as an acceptance criterion for a user story), but which has not been implemented yet. In BDD, we describe the expected behaviour of our application using concrete examples that eventually form the basis of the "acceptance criteria" for the user stories we are implementing. BDD tools such as Cucumber and JBehave not only report on test results: they also report on the user stories that these tests validate.

However, this reporting is still limited for large projects, where the number of user stories can become unwieldy. User stories are not created in isolation: rather, user stories help describe features, which support capabilities that need to be implemented to achieve the business goals of the application. So it makes sense to be able to report on test results not only at the user story level, but also at higher levels, for example in terms of features and capabilities. This makes it easier to report not only on what stories have been implemented, but also on what features and capabilities remain to be done. An example of such a report is shown in Figure 1 (or see the full report here).

Figure 1: A test coverage report listing both tested and untested requirements.

In agile projects, it is generally considered that a user story is not complete until all of its automated acceptance tests pass. Similarly, a feature cannot be considered ready to deliver until all of the acceptance criteria for the underlying user stories have been specified and implemented.
However, sensible teams shy away from trying to define all of the acceptance criteria up-front, leaving this until the "last responsible moment", often shortly before the user story is scheduled to be implemented. For this reason, reports that relate project progress and status only in terms of test results are missing the big picture. To get a more accurate idea of what features have been delivered, which ones are in progress, and what work remains to be done, we must think not in terms of test results, but in terms of the requirements as we currently understand them, matching the currently implemented tests to these requirements, but also pointing out which requirements currently have no acceptance criteria defined. And when graphs and reports illustrate how much progress has been made, the requirements with no acceptance criteria must also be part of the picture.

Requirements-level BDD reporting with Thucydides

Thucydides is an open source tool that puts some of these concepts into practice. Building on top of BDD tools such as JBehave, or using just ordinary JUnit tests, Thucydides reports not only on how the tests did, but also fits them into the broader picture, showing what requirements have been tested and, just as importantly, what requirements haven't. You can learn more about Thucydides in this tutorial or on the Thucydides website. During the rest of this article, we will see how to report on both your requirements and your test results using Thucydides, using a very simple directory-based approach. You can follow along with this example by cloning the Github project at https://github.com/thucydides-webtests/thucydides-simple-demo

Simple requirements in Thucydides - a directory-based approach

Thucydides can integrate with many different requirements management systems, and it is easy to write your own plugin to tailor the integration to suit your particular environment.
A popular approach, for example, is to store requirements in JIRA and to use Thucydides to read the requirements hierarchy directly from the JIRA cards. However, the directory-based approach is probably the easiest way to get started, and it is the one we will be looking at here.

Requirements can usually be organized in a hierarchical structure. By default, Thucydides uses a three-level hierarchy of requirements. At the top level, capabilities represent high-level capacities that the application must provide to meet its business goals. At the next level down, features help deliver these capabilities. To make implementation easier, a feature can be broken up into user stories, each of which in turn can contain a number of acceptance criteria.

Figure 2: JUnit test directories mirror the requirements hierarchy.

Of course, you don't have to use this structure if it doesn't suit you. You can override the thucydides.capability.types system property to provide your own hierarchy. For example, if you wanted a hierarchy with modules, epics, and features, you would just set thucydides.capability.types to "module,epic,feature".

When we use the default directory-based requirements strategy in Thucydides, the requirements are stored in a hierarchical directory structure that matches the requirements hierarchy. At the lowest level, a user story is represented by a JBehave *.story file, an easyb story, or a JUnit test. All of the other requirements are represented as directories (see Figure 2 for an example of such a structure). In each requirements directory, you can optionally place a file called narrative.txt, which contains a free-text summary of the requirement. This will appear in the reports, with the first line appearing as the requirement title.
A typical narrative text is illustrated in the following example:

    Learn the meaning of a word

    In order to learn the meaning of a word that I don't know
    As an online reader
    I want to be able to find out the meaning of the word

If you are implementing the acceptance criteria as JUnit tests, just place the JUnit tests in the package that matches the corresponding requirement. You need to use the thucydides.test.root system property to specify the root package of your requirements. For the example in Figure 2, this value should be set to nz.govt.nzqa.lssu.stories.

Figure 3: The narrative.txt file appears in the reports to describe a requirement.

If you are using JBehave, just place the *.story files in the src/test/resources/stories directory, again respecting a directory structure that corresponds to your requirements hierarchy. The narrative.txt files also work for JBehave requirements.

Progress is measured by the total number of passing, failing or pending acceptance criteria, either for the whole project (at the top level), or within a particular requirement as you drill down the requirements hierarchy. For the purposes of reporting, a requirement with no acceptance criteria is attributed an arbitrary number of "imaginary" pending acceptance criteria. Thucydides assumes that you need 4 tests per requirement by default, but you can override this value using the thucydides.estimated.tests.per.requirement system property.

Figure 4: For JBehave, everything goes under src/test/resources/stories.

Conclusion

BDD is an excellent approach for communicating with, and reporting back to, stakeholders. However, for accurate acceptance test reporting on real-world projects, you need to go beyond the story level, and cater for the whole requirements hierarchy. In particular, you need to not only report on tests that have been executed, but also allow for the tests that haven't been written yet.
Thucydides puts these concepts into practice: using a simple directory-based convention, you can easily integrate your requirements hierarchy into your acceptance tests.
January 15, 2013
· 34,337 Views · 1 Like
An introduction to Spock
Spock is an open source testing framework for Java and Groovy that has been attracting a growing following, especially in the Groovy community. It lets you write concise, expressive tests, using a quite readable BDD-style notation. It even comes with its own mocking library built in.

Oh. I thought he was a sci-fi character. Can I see an example?

Sure. Here's a simple one from a coding kata I did recently:

    import spock.lang.Specification

    class RomanCalculatorSpec extends Specification {
        def "I plus I should equal II"() {
            given:
            def calculator = new RomanCalculator()

            when:
            def result = calculator.add("I", "I")

            then:
            result == "II"
        }
    }

In Spock, you don't have tests, you have specifications. These are normal Groovy classes that extend the Specification class, which is actually a JUnit class. Your class contains a set of specifications, represented by methods with funny-method-names-in-quotes™. The funny-method-names-in-quotes™ take advantage of some Groovy magic to let you express your requirements in a very readable form. And since these classes are derived from JUnit, you can run them from within Eclipse like a normal Groovy unit test, and they produce standard JUnit reports, which is nice for CI servers.

Another thing: notice the structure of this test? We are using given:, when: and then: to express actions and expected outcomes. This structure is common in Behaviour-Driven Development, or BDD, frameworks like Cucumber and easyb, though Spock-style tests are generally more concise and more technically focused than those written with tools like Cucumber and easyb, which are often used for automating acceptance tests. But I digress... Actually, the example I gave earlier was a bit terse.
We could make our intent clearer by adding text descriptions after the when: and then: labels, as I've done here:

    def "I plus I should equal II"() {
        when: "I add two roman numbers together"
        def result = calculator.add("I", "I")

        then: "the result should be the roman number equivalent of their sum"
        result == "II"
    }

This is an excellent way of clarifying your ideas and documenting your API.

But where are the assertEquals statements?

Aha! I'm glad you asked! Spock uses a feature called Power Asserts. The statement after the then: is your assert. If this test fails, Spock will display a detailed analysis of what went wrong, along the following lines:

    I plus I should equal II(com.wakaleo.training.spocktutorial.RomanCalculatorSpec)  Time elapsed: 0.33 sec  <<< FAILURE!
    Condition not satisfied:

    result == "II"
    |      |
    I      false
           1 difference (50% similarity)
           I(-)
           I(I)

        at com.wakaleo.training.spocktutorial.RomanCalculatorSpec.I plus I should equal II(RomanCalculatorSpec.groovy:17)

Nice! But in JUnit, I have @Before and @After for fixtures. Can I do that in Spock?

Sure, but you don't use annotations. Instead you implement setup() and cleanup() methods (which are run before and after each specification). I've added one here to show you what they look like:

    import spock.lang.Specification

    class RomanCalculatorSpec extends Specification {
        def calculator

        def setup() {
            calculator = new RomanCalculator()
        }

        def "I plus I should equal II"() {
            when:
            def result = calculator.add("I", "I")

            then:
            result == "II"
        }
    }

You can also define setupSpec() and cleanupSpec() methods, which are run just before the first test and just after the last one.

I'm a big fan of parameterized tests in JUnit 4. Can I do that in Spock?

You sure can! In fact it's one of Spock's killer features!
    def "The lowest number should go at the end"() {
        when:
        def result = calculator.add(a, b)

        then:
        result == sum

        where:
        a    | b    | sum
        "X"  | "I"  | "XI"
        "I"  | "X"  | "XI"
        "XX" | "I"  | "XXI"
        "XX" | "II" | "XXII"
        "II" | "XX" | "XXII"
    }

This code will run the test 5 times. The variables a, b, and sum are initialized from the rows of the table in the where: clause. And if any of the tests fail, you get a detailed power-assert failure message showing the values from the offending row.

That's pretty cool too. What about mocking? Can I use Mockito?

Sure, if you want, but Spock actually comes with its own mocking framework, which is pretty neat. You set up a mock or a stub using the Mock() method. I've shown two possible ways to use this method here:

    given:
    Subscriber subscriber1 = Mock()
    def subscriber2 = Mock(Subscriber)
    ...

You can set these mocks up to behave in certain ways. Here are a few examples. You can say a method should return a certain value using the >> operator:

    subscriber1.isActive() >> true
    subscriber2.isActive() >> false

Or you could get a method to throw an exception when it is called:

    subscriber.activate() >> { throw new BlacklistedSubscriberException() }

Then you can test outcomes in a few different ways.
Here is a more complicated example to show you some of your options:

    def "Messages published by the publisher should only be received by active subscribers"() {
        given: "a publisher"
        def publisher = new Publisher()

        and: "some active subscribers"
        Subscriber activeSubscriber1 = Mock()
        Subscriber activeSubscriber2 = Mock()
        activeSubscriber1.isActive() >> true
        activeSubscriber2.isActive() >> true
        publisher.add activeSubscriber1
        publisher.add activeSubscriber2

        and: "a deactivated subscriber"
        Subscriber deactivatedSubscriber = Mock()
        deactivatedSubscriber.isActive() >> false
        publisher.add deactivatedSubscriber

        when: "a message is published"
        publisher.publishMessage("Hi there")

        then: "the active subscribers should get the message"
        1 * activeSubscriber1.receive("Hi there")
        1 * activeSubscriber2.receive({ it.contains "Hi" })

        and: "the deactivated subscriber didn't receive anything"
        0 * deactivatedSubscriber.receive(_)
    }

That does look neat. So what is the best place to use Spock?

Spock is great for unit or integration testing of Groovy or Grails projects. On the other hand, tools like easyb and Cucumber are probably better for automated acceptance tests - the format is less technical and the reporting is more appropriate for non-developers.

From http://www.wakaleo.com/blog/303-an-introduction-to-spock
November 4, 2010
· 37,799 Views · 4 Likes
Running JUnit tests in Parallel with Maven
A little-known but very useful feature slipped into JUnit 4 and recent versions of the Maven Surefire Plugin: support for parallel testing.
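For reference, parallel test execution is switched on through the Surefire plugin configuration. A minimal sketch follows; the parallel mode and thread count shown here are illustrative choices, and this configuration requires a JUnit 4.7+ provider:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Run whole test classes concurrently; 'methods' and 'both' are also accepted -->
    <parallel>classes</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```

With parallel set to classes, Surefire runs each test class on its own thread, so tests within a class still execute sequentially.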
July 7, 2010
· 55,435 Views · 1 Like
Grouping Tests Using JUnit Categories
In a well-organized build process, you want lightning-fast unit tests to run first, and provide whatever feedback they can very quickly. A nice way to do this is to be able to class your tests into different categories. For example, this can make it easier to distinguish between faster-running unit tests and slower tests such as integration, performance, load or acceptance tests. This feature exists in TestNG, but, until recently, not in JUnit. Indeed, it has been missing from the JUnit world for a long time. Using JUnit, I typically use test names (integration tests end in 'IntegrationTest', for example) or packages to identify different types of test. It is easy to configure a build script using Maven or Ant to run different types of test at different points in the build lifecycle. However, it would be nice to be able to do this in a more elegant manner.

JUnit 4.8 introduced a new feature along these lines, called Categories. However, like most new JUnit features, it is almost entirely undocumented. In this article we'll see how it works and what it can do for you.

In JUnit 4.8, you can define your own categories for your tests. Categories are implemented as classes or interfaces. Since they simply act as markers, I prefer to use interfaces. One such category interface might look like this:

    public interface IntegrationTests {}

You can also use inheritance to organize your test categories:

    public interface SlowTests {}
    public interface IntegrationTests extends SlowTests {}
    public interface PerformanceTests extends SlowTests {}

So far so good. Now you can use these categories in your tests. In this example we flag a particular test class as containing integration tests:

    @Category(IntegrationTests.class)
    public class AccountIntegrationTest {

        @Test
        public void thisTestWillTakeSomeTime() { ... }

        @Test
        public void thisTestWillTakeEvenLonger() { ... }
    }

You can also flag individual test methods if you prefer:

    public class AccountTest {

        @Test
        @Category(IntegrationTests.class)
        public void thisTestWillTakeSomeTime() { ... }

        @Test
        @Category(IntegrationTests.class)
        public void thisTestWillTakeEvenLonger() { ... }

        @Test
        public void thisOneIsRealFast() { ... }
    }

To run tests in a particular category, you need to set up a test suite. In JUnit 4, a test suite is essentially an empty annotated class. To run only tests in a particular category, you use the @RunWith(Categories.class) annotation, and specify what category you want to run using the @IncludeCategory annotation:

    @RunWith(Categories.class)
    @IncludeCategory(SlowTests.class)
    @SuiteClasses({ AccountTest.class, ClientTest.class })
    public class LongRunningTestSuite {}

You can also ask JUnit not to run tests in a particular category using the @ExcludeCategory annotation:

    @RunWith(Categories.class)
    @ExcludeCategory(SlowTests.class)
    @SuiteClasses({ AccountTest.class, ClientTest.class })
    public class UnitTestSuite {}

Test categories are great if you use JUnit test suites. I haven't used test suites for years: Maven can find all my tests by itself, thank you very much, so I don't have to remember to add my test classes to the right test suite each time I create a new one. However, test suites do give you finer control over the order in which your tests are executed, so you might still find them useful in that regard. Once you've done this, it is then easy to run tests in a particular category from within your IDE simply by running the test suite. On the tooling and build automation side of things, JUnit categories are not as well supported as TestNG groups. For example, the Maven Surefire plugin lets you specify the TestNG groups you want to run in a particular phase, but no such support exists as yet for JUnit categories.
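Category selection relies on nothing more exotic than annotation reflection. The following self-contained sketch uses a hypothetical @Category annotation and marker interfaces (not the real org.junit classes) to illustrate how a runner can filter methods by category, including category inheritance:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.*;

// A plain-Java sketch of the mechanism behind category selection, using a
// hypothetical @Category annotation and marker interfaces rather than the
// real org.junit classes. A category-aware runner essentially just filters
// test methods by inspecting annotations reflectively.
public class CategoryFilterSketch {

    public interface SlowTests {}
    public interface IntegrationTests extends SlowTests {}

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.TYPE})
    public @interface Category {
        Class<?> value();
    }

    public static class AccountTest {
        @Category(IntegrationTests.class)
        public void thisTestWillTakeSomeTime() {}

        public void thisOneIsRealFast() {}
    }

    // Select the methods whose declared category is the requested one,
    // or a sub-category of it (mirroring how category inheritance works).
    public static List<String> methodsInCategory(Class<?> testClass, Class<?> category) {
        List<String> matching = new ArrayList<String>();
        for (Method method : testClass.getDeclaredMethods()) {
            Category marker = method.getAnnotation(Category.class);
            if (marker != null && category.isAssignableFrom(marker.value())) {
                matching.add(method.getName());
            }
        }
        return matching;
    }

    public static void main(String[] args) {
        // IntegrationTests extends SlowTests, so the annotated method is found
        // when we ask for the broader SlowTests category.
        System.out.println(methodsInCategory(AccountTest.class, SlowTests.class));
        // prints [thisTestWillTakeSomeTime]
    }
}
```

The real Categories runner does essentially this over the classes listed in @SuiteClasses, honouring @IncludeCategory and @ExcludeCategory.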
You can of course configure the Surefire plugin to run a particular test suite (or test suites) in a particular phase, but that doesn't dispense with the need to write and maintain the test suite. So test categories are great, but having to run them via a test suite (and having to remember to add new test classes to the test suite) seems a bit clunky in these days of annotations and reflection.

From http://weblogs.java.net/blog/johnsmart/archive/2010/04/25/grouping-tests-using-junit-categories-0
April 26, 2010
· 21,390 Views
Automated Deployment With Cargo and Maven - a Short Primer
Cargo is a versatile library that lets you manage, and deploy applications to, a variety of application servers. In this article, we look at how to use Cargo with Maven.

If you are starting from scratch, you can use an archetype to create a Cargo-enabled web application:

    mvn archetype:create -DarchetypeGroupId=org.codehaus.cargo \
        -DarchetypeArtifactId=cargo-archetype-webapp-single-module \
        -DgroupId=com.wakaleo -DartifactId=ezbank

Or it is easy to add to an existing configuration - just add the cargo-maven2-plugin to your pom file. The default configuration will deploy the application to an embedded Jetty server:

    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <version>1.0</version>
    </plugin>

Then just run mvn cargo:start. However, Cargo is designed for deployment, and does not support rapid lifecycle development - use the ordinary Jetty plugin for that.

Deploying to a Tomcat instance

You can run your integration tests against a Tomcat server that Cargo will initialize and configure for the occasion - this is referred to as 'standalone' mode:

    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <version>1.0</version>
      <configuration>
        <container>
          <containerId>tomcat6x</containerId>
          <home>/usr/local/apache-tomcat-6.0.18</home>
        </container>
        <configuration>
          <type>standalone</type>
          <home>target/tomcat6x</home>
        </configuration>
      </configuration>
    </plugin>

Cargo will create a base directory (think CATALINA_BASE) in a directory that you specify. It will use the Tomcat home directory that you provide. At each installation, Cargo will destroy and recreate the base directory. You can also download and install a Tomcat installation as required using the <zipUrlInstaller> element:

    <zipUrlInstaller>
      <url>http://www.orionserver.com/distributions/orion2.0.5.zip</url>
      <installDir>${java.io.tmpdir}/cargoinstalls</installDir>
    </zipUrlInstaller>

This is a more portable solution which is useful for integration tests.

Running integration tests with Cargo

You can use Cargo to automatically start up a web server to run your integration tests. This means you can run your integration tests on any of the supported servers (Tomcat, Jetty, JBoss, Weblogic,...):

    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <version>1.0</version>
      <executions>
        <execution>
          <id>start-container</id>
          <phase>pre-integration-test</phase>
          <goals>
            <goal>start</goal>
          </goals>
        </execution>
        <execution>
          <id>stop-container</id>
          <phase>post-integration-test</phase>
          <goals>
            <goal>stop</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <wait>false</wait>
        <container>
          <containerId>tomcat6x</containerId>
          <home>/usr/local/apache-tomcat-6.0.18</home>
        </container>
        <configuration>
          <type>standalone</type>
          <home>target/tomcat6x</home>
        </configuration>
      </configuration>
    </plugin>

Deploying to an existing server

You can also deploy to a running application server. You need to use the 'existing' configuration type (<type>existing</type>). You can use a separate profile to run the integration tests in a standalone instance and then deploy to a running instance:

    <profile>
      <id>integration</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.codehaus.cargo</groupId>
            <artifactId>cargo-maven2-plugin</artifactId>
            <version>1.0</version>
            <configuration>
              <container>
                <containerId>tomcat6x</containerId>
              </container>
              <configuration>
                <type>existing</type>
                <home>/usr/local/apache-tomcat-6.0.18</home>
              </configuration>
            </configuration>
          </plugin>
          ...
        </plugins>
      </build>
    </profile>

Then you can deploy your application as shown here:

    $ mvn install
    $ mvn cargo:deploy -Pintegration

Deploying to a remote server

You can also deploy to a remote server, using the server-specific remote API (e.g. the HTML manager application for Tomcat). You need to set up a container of type 'remote' and a configuration of type 'runtime':

    <configuration>
      <container>
        <containerId>tomcat6x</containerId>
        <type>remote</type>
      </container>
      <configuration>
        <type>runtime</type>
        <properties>
          <cargo.remote.username>admin</cargo.remote.username>
          <cargo.tomcat.manager.url>http://localhost:8888/manager</cargo.tomcat.manager.url>
          ...
        </properties>
      </configuration>
    </configuration>

In the <properties> section, you define server-specific properties (see the Cargo documentation). Then you use Cargo as usual:

    $ mvn cargo:redeploy -o
    ...
    [INFO] [cargo:redeploy]
    [INFO] [mcat6xRemoteDeployer] Redeploying [/Users/johnsmart/.m2/repository/org/ebank/ebank-web/1.0.0-SNAPSHOT/ebank-web-1.0.0-SNAPSHOT.war]
    [INFO] [mcat6xRemoteDeployer] Undeploying [/Users/johnsmart/.m2/repository/org/ebank/ebank-web/1.0.0-SNAPSHOT/ebank-web-1.0.0-SNAPSHOT.war]
    [INFO] [mcat6xRemoteDeployer] Deploying [/Users/johnsmart/.m2/repository/org/ebank/ebank-web/1.0.0-SNAPSHOT/ebank-web-1.0.0-SNAPSHOT.war]
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 4 seconds
    [INFO] Finished at: Fri Jul 17 17:45:34 CEST 2009
    [INFO] Final Memory: 6M/12M
    [INFO] ------------------------------------------------------------------------

Using a dedicated deployer module

You can dissociate the build process from the application deployment process by creating a separate Maven module dedicated to deployments. This also makes it easier to build and deploy your WAR file to Nexus on one server, and then deploy to your application server directly on the target machine. To do this, you create a dedicated Maven module. It only needs to contain the Cargo plugin and a dependency on the application to be deployed. The Cargo plugin uses the <deployables> section to obtain the WAR file to be deployed from your Nexus repository:

    ...
    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <version>1.0</version>
      <configuration>
        <container>
          <containerId>tomcat6x</containerId>
        </container>
        <configuration>
          <type>existing</type>
          <home>/usr/local/apache-tomcat-6.0.18</home>
        </configuration>
        <deployer>
          <deployables>
            <deployable>
              <artifactId>ebank-web</artifactId>
              <groupId>org.ebank</groupId>
              <type>war</type>
            </deployable>
          </deployables>
        </deployer>
      </configuration>
    </plugin>

The dependencies section contains a reference to the WAR file to be deployed. You can use a property here so that you can pass a version number from the command line:

    ...
    <dependency>
      <groupId>org.ebank</groupId>
      <artifactId>ebank-web</artifactId>
      <type>war</type>
      <version>${target.version}</version>
    </dependency>
    ...
    <properties>
      <target.version>${project.version}</target.version>
    </properties>

From http://weblogs.java.net/blog/johnsmart
December 29, 2009
· 38,132 Views · 1 Like
Data-driven tests With JUnit 4 and Excel
One nice feature in JUnit 4 is that of Parameterized Tests, which let you do data-driven testing in JUnit with a minimum of fuss. It's easy enough, and very useful, to set up basic data-driven tests by defining your test data directly in your Java class. But what if you want to get your test data from somewhere else? In this article, we look at how to obtain test data from an Excel spreadsheet.

Parameterized tests allow data-driven tests in JUnit. That is, rather than having a number of different test cases that explore various aspects of your class's (or your application's) behavior, you define sets of input parameters and expected results, and test how your application (or, more often, one particular component) behaves. Data-driven tests are great for applications involving calculations, and for testing ranges, boundary conditions and corner cases. In JUnit, a typical parameterized test might look like this:

    @RunWith(Parameterized.class)
    public class PremiumTweetsServiceTest {

        private int numberOfTweets;
        private double expectedFee;

        @Parameters
        public static Collection data() {
            return Arrays.asList(new Object[][] {
                { 0, 0.00 }, { 50, 5.00 }, { 99, 9.90 }, { 100, 10.00 },
                { 101, 10.08 }, { 200, 18 }, { 499, 41.92 }, { 500, 42 },
                { 501, 42.05 }, { 1000, 67 }, { 10000, 517 },
            });
        }

        public PremiumTweetsServiceTest(int numberOfTweets, double expectedFee) {
            super();
            this.numberOfTweets = numberOfTweets;
            this.expectedFee = expectedFee;
        }

        @Test
        public void shouldCalculateCorrectFee() {
            PremiumTweetsService premiumTweetsService = new PremiumTweetsService();
            double calculatedFees = premiumTweetsService.calculateFeesDue(numberOfTweets);
            assertThat(calculatedFees, is(expectedFee));
        }
    }

The test class has member variables that correspond to input values (numberOfTweets) and expected results (expectedFee). The @RunWith(Parameterized.class) annotation gets JUnit to inject your test data into instances of your test class, via the constructor.
The test data is provided by a method with the @Parameters annotation. This method needs to return a collection of arrays, but beyond that you can implement it however you want. In the above example, we just create an embedded array in the Java code. However, you can also get it from other sources. To illustrate this point, I wrote a simple class that reads in an Excel spreadsheet and provides the data in it in this form:

    @RunWith(Parameterized.class)
    public class DataDrivenTestsWithSpreadsheetTest {

        private double a;
        private double b;
        private double aTimesB;

        @Parameters
        public static Collection spreadsheetData() throws IOException {
            InputStream spreadsheet = new FileInputStream("src/test/resources/aTimesB.xls");
            return new SpreadsheetData(spreadsheet).getData();
        }

        public DataDrivenTestsWithSpreadsheetTest(double a, double b, double aTimesB) {
            super();
            this.a = a;
            this.b = b;
            this.aTimesB = aTimesB;
        }

        @Test
        public void shouldCalculateATimesB() {
            double calculatedValue = a * b;
            assertThat(calculatedValue, is(aTimesB));
        }
    }

The Excel spreadsheet contains multiplication tables in three columns. The SpreadsheetData class uses the Apache POI project to load data from an Excel spreadsheet and transform it into a list of Object arrays compatible with the @Parameters annotation. I've placed the source code, complete with unit-test examples, on BitBucket.
For the curious, the SpreadsheetData class is shown here:

    public class SpreadsheetData {

        private transient Collection data = null;

        public SpreadsheetData(final InputStream excelInputStream) throws IOException {
            this.data = loadFromSpreadsheet(excelInputStream);
        }

        public Collection getData() {
            return data;
        }

        private Collection loadFromSpreadsheet(final InputStream excelFile) throws IOException {
            HSSFWorkbook workbook = new HSSFWorkbook(excelFile);
            data = new ArrayList();
            Sheet sheet = workbook.getSheetAt(0);
            int numberOfColumns = countNonEmptyColumns(sheet);
            List rows = new ArrayList();
            List rowData = new ArrayList();
            for (Row row : sheet) {
                if (isEmpty(row)) {
                    break;
                } else {
                    rowData.clear();
                    for (int column = 0; column < numberOfColumns; column++) {
                        Cell cell = row.getCell(column);
                        rowData.add(objectFrom(workbook, cell));
                    }
                    rows.add(rowData.toArray());
                }
            }
            return rows;
        }

        private boolean isEmpty(final Row row) {
            Cell firstCell = row.getCell(0);
            boolean rowIsEmpty = (firstCell == null)
                    || (firstCell.getCellType() == Cell.CELL_TYPE_BLANK);
            return rowIsEmpty;
        }

        /**
         * Count the number of columns, using the number of non-empty cells in the
         * first row.
         */
        private int countNonEmptyColumns(final Sheet sheet) {
            Row firstRow = sheet.getRow(0);
            return firstEmptyCellPosition(firstRow);
        }

        private int firstEmptyCellPosition(final Row cells) {
            int columnCount = 0;
            for (Cell cell : cells) {
                if (cell.getCellType() == Cell.CELL_TYPE_BLANK) {
                    break;
                }
                columnCount++;
            }
            return columnCount;
        }

        private Object objectFrom(final HSSFWorkbook workbook, final Cell cell) {
            Object cellValue = null;
            if (cell.getCellType() == Cell.CELL_TYPE_STRING) {
                cellValue = cell.getRichStringCellValue().getString();
            } else if (cell.getCellType() == Cell.CELL_TYPE_NUMERIC) {
                cellValue = getNumericCellValue(cell);
            } else if (cell.getCellType() == Cell.CELL_TYPE_BOOLEAN) {
                cellValue = cell.getBooleanCellValue();
            } else if (cell.getCellType() == Cell.CELL_TYPE_FORMULA) {
                cellValue = evaluateCellFormula(workbook, cell);
            }
            return cellValue;
        }

        private Object getNumericCellValue(final Cell cell) {
            Object cellValue;
            if (DateUtil.isCellDateFormatted(cell)) {
                cellValue = new Date(cell.getDateCellValue().getTime());
            } else {
                cellValue = cell.getNumericCellValue();
            }
            return cellValue;
        }

        private Object evaluateCellFormula(final HSSFWorkbook workbook, final Cell cell) {
            FormulaEvaluator evaluator = workbook.getCreationHelper().createFormulaEvaluator();
            CellValue cellValue = evaluator.evaluate(cell);
            Object result = null;
            if (cellValue.getCellType() == Cell.CELL_TYPE_BOOLEAN) {
                result = cellValue.getBooleanValue();
            } else if (cellValue.getCellType() == Cell.CELL_TYPE_NUMERIC) {
                result = cellValue.getNumberValue();
            } else if (cellValue.getCellType() == Cell.CELL_TYPE_STRING) {
                result = cellValue.getStringValue();
            }
            return result;
        }
    }

Data-driven testing is a great way to test calculation-based applications more thoroughly. In a real-world application, this Excel spreadsheet could be provided by the client or the end-user, with the business logic encoded within the spreadsheet.
(The POI library handles numerical calculations just fine, though it seems to have a bit of trouble with calculations using dates.) In this scenario, the Excel spreadsheet helps to define your requirements, allows effective test-driven development of the code itself, and also becomes part of your acceptance tests.

From http://weblogs.java.net/blog/johnsmart
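The @Parameters pattern is not tied to Excel: any code that produces a collection of object arrays will do. As a minimal illustration of the same idea (a hypothetical class, not part of the BitBucket project), here is a sketch that builds the rows from CSV text instead of a spreadsheet:

```java
import java.util.*;

// A sketch of the same pattern with CSV text instead of a spreadsheet
// (illustrative only -- not part of the article's BitBucket project).
// Each line "a,b,aTimesB" becomes one Object[] row, the shape that a
// JUnit @Parameters method is expected to return.
public class CsvTestData {

    public static Collection<Object[]> rowsFrom(String csv) {
        List<Object[]> rows = new ArrayList<Object[]>();
        for (String line : csv.split("\n")) {
            if (line.trim().isEmpty()) {
                continue; // skip blank lines
            }
            String[] fields = line.split(",");
            Object[] row = new Object[fields.length];
            for (int i = 0; i < fields.length; i++) {
                row[i] = Double.parseDouble(fields[i].trim());
            }
            rows.add(row);
        }
        return rows;
    }

    public static void main(String[] args) {
        // Two rows of a * b = aTimesB data
        Collection<Object[]> rows = rowsFrom("2,3,6\n4,5,20");
        System.out.println(rows.size()); // prints 2
    }
}
```

A @Parameters method could then simply return rowsFrom(...) applied to the contents of a file, keeping the test class itself unchanged.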
November 30, 2009
· 42,507 Views · 1 Like

Comments

Where Have All the Flexible Designs Gone?

Aug 22, 2014 · Mr B Loid

I've seen teams over-invest in unit tests, but usually when they think of them as tests, which in my opinion is not an effective way of working. And I absolutely agree, tests can be badly written, in which case they lose a lot of their value.

For these reasons, I prefer to think in terms of executable specifications, starting from high level, often end-to-end automated acceptance criteria (which play the role of integration tests), and working down to more detailed specifications for low level components. Whether it is implemented as a unit or an integration test is not that important to me, unless it adversely affects the time it takes to run (and hence the feedback cycle). The quality of the tests, and just as importantly, their relevance, is built in: if I don't know what a unit test does, I consider that I might as well delete it because if it fails, I won't know what to do. So the way the tests are named, how they are organized, and how they are written, is critical. Tests written this way are good at finding regressions, but they also document your thought process when you wrote them.

Written this way, these tests most definitely do help me sleep better ;-).
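The naming style described above can be sketched in plain Java, with no particular test framework, so the intent stays front and centre. The `Account` class and the scenario here are hypothetical; the point is that the class and method names read as a specification rather than as "a test of a method".

```java
public class WhenTransferringFunds {

    // Hypothetical class under specification.
    static class Account {
        private int balance;
        Account(int opening) { balance = opening; }
        void transferTo(Account other, int amount) {
            if (amount > balance) throw new IllegalArgumentException("Insufficient funds");
            balance -= amount;
            other.balance += amount;
        }
        int balance() { return balance; }
    }

    // The name states the expected behaviour, not the method being exercised.
    static void theSourceAccountShouldBeDebited() {
        Account source = new Account(100);
        Account target = new Account(0);
        source.transferTo(target, 40);
        if (source.balance() != 60) {
            throw new AssertionError("expected 60, got " + source.balance());
        }
    }

    public static void main(String[] args) {
        theSourceAccountShouldBeDebited();
        System.out.println("Specification passed");
    }
}
```

If this specification fails, its name alone tells you which behaviour regressed, which is exactly the property argued for above.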

Where Have All the Flexible Designs Gone?

Aug 21, 2014 · Mr B Loid

The key word here is "knowingly": the habit I am referring to here is when developers knowingly commit code that will break the build, because some of the acceptance tests are still "work-in-progress".

Is Self Submission Really that Bad?

Oct 17, 2012 · admin

You probably should stick to Hamcrest 1.1, rather than 1.2.1 or 1.3 - the versions after 1.1 are buggy for non-trivial Hamcrest expressions.
What's up with the JUnit and Hamcrest Dependencies?

Oct 17, 2012 · James Sugrue

You probably should stick to Hamcrest 1.1, rather than 1.2.1 or 1.3 - the versions after 1.1 are buggy for non-trivial Hamcrest expressions.
FREE ebook - Jenkins: The Definitive Guide, Continuous integration for the masses

Jul 23, 2011 · Dev Stonez

Towards the bottom of the page.
Development Bonanza Brings New Ruby, PHP, Java Tools - Software

Jul 19, 2011 · Lebon Bon Lebon

Fair point, and I totally agree with the danger of going too far and too deep with high-level functional tests (including implementing the details too soon when the UI is not yet stable) instead of more comprehensive unit tests.
Development Bonanza Brings New Ruby, PHP, Java Tools - Software

Jul 17, 2011 · Lebon Bon Lebon

Yes and no. For most apps, UI tests are too fragile to implement first (I have seen exceptions to this rule). However, if you use an ATDD/BDD approach, you would automate a set of pending high-level functional tests before doing any code, and implement the tests once the UI is reasonably stable. Note that I'm talking about high-level, specification-by-example style tests, which shouldn't go into implementation details but remain very high level (in other words, focus on 'what', not 'how'). This gives you your goal posts for the story and even for an iteration, and helps massively with team communication. It also gives you the flexibility to implement the UI as best you see fit, without compromising the tests. Gojko Adzic's 'Specification By Example' book is a good introduction to this approach.
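One minimal way to sketch the "pending high-level tests before any code" idea, without committing to a BDD framework: record the scenarios up front as named entries with no implementation, and report them as pending rather than failing. The scenario names below are hypothetical examples.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PendingScenarios {

    // High-level scenarios expressed as 'what', not 'how' - recorded up front.
    // A null body means the scenario is still pending implementation.
    static Map<String, Runnable> scenarios() {
        Map<String, Runnable> s = new LinkedHashMap<>();
        s.put("A registered user can check out with a saved card", null);
        s.put("An empty cart cannot proceed to checkout", null);
        return s;
    }

    public static void main(String[] args) {
        for (Map.Entry<String, Runnable> e : scenarios().entrySet()) {
            if (e.getValue() == null) {
                System.out.println("PENDING: " + e.getKey());
            } else {
                e.getValue().run();
                System.out.println("PASSED:  " + e.getKey());
            }
        }
    }
}
```

BDD tools like easyb have first-class support for pending scenarios; this sketch just shows the shape of the practice.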
Anonymous Classes: Java's Synthetic Closure

Mar 13, 2011 · Gerd Storm

You can copy artifacts from one build to another quite easily with the Copy Artifacts plugin. I discuss this technique in some detail in the next chapter of 'Jenkins: The Definitive Guide', which should be available shortly.
PHP backup of a mysql database

Aug 20, 2010 · Andrea Ingaglio

Actually, that's Maven - Maven only guarantees to be able to compare version numbers in the standard three-number format (0.9.0 fits that format, but 0.9.1.2 doesn't; 3.0.4-RELEASE would have, but 3.0.4.RELEASE doesn't). That's why the plugin behaves strangely with version numbers like this.
Silverlight Install Modes

May 12, 2010 · Tony Thomas

Session ID: S312977 Session Title: Getting More from Your CI Server: Taking Hudson to the Next Level
Java Build Tools: Ant vs. Maven

Jan 03, 2010 · Joseph Bradington

What's the big deal with downloading core components once (they are tied to the version of Maven you are using) on an as-required basis? No one complains about apt-get downloading half the internet when you install something on Ubuntu.
Java Build Tools: Ant vs. Maven

Jan 03, 2010 · Joseph Bradington

The Maven web site is a bit cluttered in places, but the free online book ('Maven - The Definitive Guide - http://www.sonatype.com/books/maven-book/reference) is excellent.
Generics puzzler - array construction

Nov 25, 2009 · Alex Miller

PowerMock is indeed a great tool for this sort of thing - I was reserving some examples of how to use it for a future article ;-)
Opening Up the Delphi Field Test

Oct 21, 2009 · Mr B Loid

Indeed, when using mocks, beware of interaction tests that bind you too tightly to the implementation. State-based testing (using stubs to isolate the class being tested from its collaborators) is generally safer and more robust. That said, there are cases where interaction-based testing is useful - you just have to use them with moderation.
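The distinction described above can be sketched with a hand-rolled stub, no mocking library required. The stub returns a canned answer and records nothing about how it was called; the assertion then checks the resulting state, not the interactions. All class names here are hypothetical.

```java
public class StateBasedSketch {

    // Collaborator of the class under test.
    interface RateService {
        double currentRate(String currency);
    }

    // Stub: a canned answer, no record of calls - this is what makes the
    // test state-based rather than interaction-based.
    static class StubRateService implements RateService {
        public double currentRate(String currency) { return 1.25; }
    }

    // Hypothetical class under test.
    static class PriceConverter {
        private final RateService rates;
        PriceConverter(RateService rates) { this.rates = rates; }
        double toLocal(double amount, String currency) {
            return amount * rates.currentRate(currency);
        }
    }

    public static void main(String[] args) {
        PriceConverter converter = new PriceConverter(new StubRateService());
        double local = converter.toLocal(100.0, "USD");
        // State-based assertion: we care about the result, not which methods
        // were called on the collaborator or in what order.
        if (Math.abs(local - 125.0) > 0.001) throw new AssertionError("expected 125.0");
        System.out.println("converted = " + local);
    }
}
```

An interaction-based version would instead verify that `currentRate("USD")` was invoked exactly once, which couples the test to the implementation - the fragility warned about above.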
Testing for errant network connections

Oct 15, 2009 · Mr B Loid

You are correct, Arthur, in saying that TDD alone is not a complete answer to the 'waste code' issue, although TDD/BDD does help a great deal with waste code on a technical level (less gold plating). I agree that for fundamental matching of specified requirements to real user needs, you need other agile practices as well. Regarding open source projects that use TDD/BDD in practice, have you looked at FitNesse or easyb?
Testing for errant network connections

Oct 14, 2009 · Mr B Loid

I think the LOC issue is missing the bigger picture. It is fairly well observed that one of the issues with traditional development practices is that you end up writing features (and code) that are not used and/or not requested by the end user, and at the same time miss important features or scenarios that the users really do need (whether they mentioned them or not - important corner cases, for example, that show up as bugs afterwards). The refactoring in TDD shouldn't be underestimated, either. In TDD, you may well write less code, but you are much more likely to write the correct code, and in a way that is much better designed and easier to maintain. Again, maintenance costs are not visible here, but are very, very real, and significantly higher (by orders of magnitude, I suspect), for non-TDD code.
Testing for errant network connections

Oct 12, 2009 · Mr B Loid

Be careful not to confuse LOC with productivity. In my experience (and, evidently, that of others as well), non-TDD development is much less focused, and results in (a) unnecessary (and ultimately unused) code, and (b) code that is less concise (probably due to the emphasis on refactoring in TDD). If you apply TDD from the outset, you actually don't have to "go through hoops"; on the contrary, it's quite a natural and fluid process.

That said, the extra time taken by the TDD teams surprises me a little - I suspect the fact that these teams were largely new to TDD may have contributed. The IBM approach to TDD, if you read the details, seems a little on the rigid side (UML and sequence diagrams for initial design, a test per method rather than a more BDD-style approach...). The Microsoft approach seems somewhat sub-optimal as well (the testing framework was run from the command line, and the resulting log files analysed...). So these approaches to TDD are better than nothing, but far from an example of TDD at its best. From personal experience, I find that TDD, when done well, does not significantly add to development time - in fact, there are many cases where developers can easily get bogged down when not using TDD. For experienced TDDers, TDD helps maintain a smooth flow of development which is, at the end of the day, very productive.

Also, don't forget that it is very easy to write lots of lines of code quickly if you remove the constraint that it has to work - how much more time did it take to fix the extra bugs in the non-TDD code? Between 2.5 and 10 times as many bugs in the non-TDD code, that's quite a bit of fixing, especially if it's done some time after the code has been written.

Thanks for your comments!

What's Wrong with the For Loop

Oct 10, 2009 · Mr B Loid

Sounds cool - I'll check it out.
Making Magic with .htaccess Files

Apr 16, 2009 · Michael Bernat

I think you meant Homo Sapiens, not Homo Erectus (who lived around 2 million years ago) :-).
Python Tutorial Index Page

Feb 02, 2009 · royans tharakan

True, running all the tests before check-in is a recommended practice. But, from what I understand, JUnitMax does run all of the tests, it just runs the ones it thinks are the most relevant first.
Python Tutorial Index Page

Feb 02, 2009 · royans tharakan

This is a tricky one. TeamCity and Pulse have the idea of "personal builds", but the implementation is a tad too intrusive for my liking (commits can only be done from the IDE and must go through the TeamCity server, for example). A more generic approach that I am investigating myself is to use development and integration branches in Subversion, along with the Subversion 1.5 merge features, to "promote" changes automatically from the development branch to the integration branch if (and only if) the build succeeds. Still fairly experimental, though.

How Microsoft uses scripting to test math functionality in Mac Excel

Jan 05, 2009 · admin

Hi Steven,

The 2008 Java Power Tools bootcamps were indeed well suited to organisations trying to get started with techniques like build automation, TDD and CI. The course is quite flexible, however, and the content and level of detail varies from session to session depending on the requirements and preferences of the students. The course is great for shops that still have little or no (or possibly out-of-date) build automation strategies in place (and there are more organisations of this type out there than you might think!). But I've also given the course to organisations and students who are more familiar with the basic concepts and many of the techniques, but who want to get up to speed in other areas (such as Maven 2, BDD, or distributed CI strategies), or who want to get a well-rounded picture of the state of the art in build automation, TDD, BDD, CI, and so forth.

One of the great things about the new 5-day format is that it gives more time to cover more advanced topics such as automated release and deployment strategies with Maven 2, Nexus and Hudson, advanced multi-module Maven projects, distributed builds, and more advanced TDD and BDD testing strategies.

In the new version of the Bootcamps, we cover unit and integration testing in Groovy and with easyb in a fair bit of detail, including web testing (Selenium), database testing (DbUnit) using Groovy and easyb, and web service testing using SoapUI.

We also cover more advanced CI integration strategies, including scaling CI, using CI with multiple SCM branches, automating deployment to different environments, coordinating releases with CI, integrating with tools like Trac and JIRA, and so on.

I'm really looking forward to this season of bootcamps - I think it will be a lot of fun, and give enough time to both cover the basics and still get into some of the more advanced topics, or, for more advanced students, skim over the basics and concentrate on the advanced material in detail.

How Microsoft uses scripting to test math functionality in Mac Excel

Jan 03, 2009 · admin

Hi Gian,

My best wishes for 2009 to you too!

The plans for the Europe/UK are still being made, but at this stage they will probably be towards the middle of the year (June or early July). There may be others as well - I'll publish more news as plans progress!

John.

Mongrel 0.3.13.4 for UNIX released

Jun 16, 2008 · Dieter Komendera

Nice post. I admit that EJB 3 has made huge progress since the bad old days of EJB 2.x, and the competition between EJB 3 and Spring/Hibernate is IMO very healthy for the industry as a whole. Annotations in Spring 3.x have similarly made Spring a whole lot nicer to use than the old approach of cumbersome XML configuration files. However, in a Spring 3 application, I can write both unit tests (testing classes in isolation, possibly using mocks) and integration tests (with classes deployed in the Spring context, or "container" if you prefer) directly in JUnit, without having to deploy anything anywhere. This has always been one of my biggest issues with EJBs. How possible will this be in EJB 3.1?
Pure Ajax creates the next generation legacy applications

Feb 27, 2008 · Harshad Oak

True, the Continuum 1.1 permission schema is very nice.
Agile Ajax: More Killer App - OpenRecord, a Wiki/Spreadsheet

Feb 26, 2008 · Dietrich Kappe

You can vote for any product as long as it's Hudson ;-) . Seriously, though, that glitch seems to be fixed now.
Agile Ajax: More Killer App - OpenRecord, a Wiki/Spreadsheet

Feb 25, 2008 · Dietrich Kappe

Thanks for the clarification, Slava.
Overcoming Procrastination

Feb 21, 2008 · Geertjan Wielenga

Hmmm, the best thing about procrastination is that you can always put it off till later.
AJAX Inline Dictionary like WallStreetJournal.com

Feb 18, 2008 · Chevol Davis

Hmmm, I'll be watching this space with great interest...
Moving from software coder to developer

Feb 04, 2008 · Andrew Forward

Hmmm, I've seen that done in some places: a bunch of functionally unrelated applications bundled together in the same source tree because they were deployed on the same site. The result was terribly slow builds (admittedly, CVS tagging also had a lot to do with it), a complicated build script, and a bloated testing process, because testers were convinced that, since the whole application had been rebuilt, everything needed to be retested.
