In this article, excerpted from the book Docker in Action, I will show you how to open access to shared memory between containers.

Linux provides a few tools for sharing memory between processes running on the same computer. This form of inter-process communication (IPC) performs at memory speeds. It is often used when the latency associated with network or pipe based IPC drags software performance below requirements. The best examples of shared memory based IPC usage are in scientific computing and some popular database technologies like PostgreSQL.

Docker creates a unique IPC namespace for each container by default. The Linux IPC namespace partitions shared memory primitives like named shared memory blocks and semaphores, as well as message queues. It is okay if you are not sure what these are. Just know that they are tools used by Linux programs to coordinate processing. The IPC namespace prevents processes in one container from accessing the memory on the host or in other containers.

Sharing IPC Primitives Between Containers

I've created an image named allingeek/ch6_ipc that contains both a producer and consumer. They communicate using shared memory. Listing 1 will help you understand the problem with running these in separate containers.

Listing 1: Launch a Communicating Pair of Programs

# start a producer
docker run -d -u nobody --name ch6_ipc_producer \
  allingeek/ch6_ipc -producer

# start the consumer
docker run -d -u nobody --name ch6_ipc_consumer \
  allingeek/ch6_ipc -consumer

Listing 1 starts two containers. The first creates a message queue and starts broadcasting messages on it. The second should pull from the message queue and write the messages to the logs. You can see what each is doing by using the following commands to inspect the logs of each:

docker logs ch6_ipc_producer
docker logs ch6_ipc_consumer

If you executed the commands in Listing 1, you will notice that something is wrong: the consumer never sees any messages on the queue.
Each process used the same key to identify the shared memory resource, but they referred to different memory. The reason is that each container has its own shared memory namespace. If you need to run programs that communicate with shared memory in different containers, then you will need to join their IPC namespaces with the --ipc flag. The --ipc flag has a container mode that will create a new container in the same IPC namespace as another target container.

Listing 2: Joining Shared Memory Namespaces

# remove the original consumer
docker rm -v ch6_ipc_consumer

# start a new consumer with a joined IPC namespace
docker run -d --name ch6_ipc_consumer \
  --ipc container:ch6_ipc_producer \
  allingeek/ch6_ipc -consumer

Listing 2 rebuilds the consumer container and reuses the IPC namespace of the ch6_ipc_producer container. This time the consumer should be able to access the same memory location where the server is writing. You can see this working by using the following commands to inspect the logs of each:

docker logs ch6_ipc_producer
docker logs ch6_ipc_consumer

Remember to clean up your running containers before moving on:

# remember: the v option will clean up volumes,
# the f option will kill the container if it is running,
# and the rm command takes a list of containers
docker rm -vf ch6_ipc_producer ch6_ipc_consumer

There are obvious security implications to reusing the shared memory namespaces of containers, but this option is available if you need it. Sharing memory between containers is a safer alternative than sharing memory with the host.
This technical tip explains how to add a watermark to a Microsoft Word document inside Android applications. Sometimes you need to insert a watermark into a Word document, for instance if you would like to print a draft document or mark it as confidential. In Microsoft Word, you can quickly insert a watermark using the Insert Watermark command. Few people using this command realize that such a “watermark” is just a shape with text inserted into a header or footer and positioned in the centre of the page. While Aspose.Words doesn't have a single insert watermark command like Microsoft Word, it is very easy to insert any shape or image into a header or footer and thus create a watermark of any imaginable type. The code below inserts a watermark into a Word document.

[Java Code Sample]

package AddWatermark;

import java.awt.Color;
import java.io.File;
import java.net.URI;

import com.aspose.words.Document;
import com.aspose.words.Shape;
import com.aspose.words.ShapeType;
import com.aspose.words.RelativeHorizontalPosition;
import com.aspose.words.RelativeVerticalPosition;
import com.aspose.words.WrapType;
import com.aspose.words.VerticalAlignment;
import com.aspose.words.HorizontalAlignment;
import com.aspose.words.Paragraph;
import com.aspose.words.Section;
import com.aspose.words.HeaderFooterType;
import com.aspose.words.HeaderFooter;

public class Program {

    public static void main(String[] args) throws Exception {
        // Sample infrastructure.
        URI exeDir = Program.class.getResource("").toURI();
        String dataDir = new File(exeDir.resolve("../../Data")) + File.separator;

        Document doc = new Document(dataDir + "TestFile.doc");
        insertWatermarkText(doc, "CONFIDENTIAL");
        doc.save(dataDir + "TestFile Out.doc");
    }

    /**
     * Inserts a watermark into a document.
     *
     * @param doc The input document.
     * @param watermarkText Text of the watermark.
     */
    private static void insertWatermarkText(Document doc, String watermarkText) throws Exception {
        // Create a watermark shape. This will be a WordArt shape.
        // You are free to try other shape types as watermarks.
        Shape watermark = new Shape(doc, ShapeType.TEXT_PLAIN_TEXT);

        // Set up the text of the watermark.
        watermark.getTextPath().setText(watermarkText);
        watermark.getTextPath().setFontFamily("Arial");
        watermark.setWidth(500);
        watermark.setHeight(100);

        // Text will be directed from the bottom-left to the top-right corner.
        watermark.setRotation(-40);

        // Remove the following two lines if you need solid black text.
        watermark.getFill().setColor(Color.GRAY); // Try LightGray to get more Word-style watermark
        watermark.setStrokeColor(Color.GRAY); // Try LightGray to get more Word-style watermark

        // Place the watermark in the page center.
        watermark.setRelativeHorizontalPosition(RelativeHorizontalPosition.PAGE);
        watermark.setRelativeVerticalPosition(RelativeVerticalPosition.PAGE);
        watermark.setWrapType(WrapType.NONE);
        watermark.setVerticalAlignment(VerticalAlignment.CENTER);
        watermark.setHorizontalAlignment(HorizontalAlignment.CENTER);

        // Create a new paragraph and append the watermark to this paragraph.
        Paragraph watermarkPara = new Paragraph(doc);
        watermarkPara.appendChild(watermark);

        // Insert the watermark into all headers of each document section.
        for (Section sect : doc.getSections()) {
            // There could be up to three different headers in each section; since we want
            // the watermark to appear on all pages, insert it into all headers.
            insertWatermarkIntoHeader(watermarkPara, sect, HeaderFooterType.HEADER_PRIMARY);
            insertWatermarkIntoHeader(watermarkPara, sect, HeaderFooterType.HEADER_FIRST);
            insertWatermarkIntoHeader(watermarkPara, sect, HeaderFooterType.HEADER_EVEN);
        }
    }

    private static void insertWatermarkIntoHeader(Paragraph watermarkPara, Section sect, int headerType) throws Exception {
        HeaderFooter header = sect.getHeadersFooters().getByHeaderFooterType(headerType);
        if (header == null) {
            // There is no header of the specified type in the current section, create it.
            header = new HeaderFooter(sect.getDocument(), headerType);
            sect.getHeadersFooters().add(header);
        }
        // Insert a clone of the watermark into the header.
        header.appendChild(watermarkPara.deepClone(true));
    }
}
The loop is the classic way of processing collections, but with the greater adoption of first-class functions in programming languages the collection pipeline is an appealing alternative. In this article I look at refactoring loops to collection pipelines with a series of small examples. I'm publishing this article in installments. This installment adds an example of refactoring a loop that summarizes flight delay data for each destination airport.

A common task in programming is processing a list of objects. Most programmers naturally do this with a loop, as it's one of the basic control structures we learn with our very first programs. But loops aren't the only way to represent list processing, and in recent years more people are making use of another approach, which I call the collection pipeline. This style is often considered to be part of functional programming, but I used it heavily in Smalltalk. As OO languages gain support for lambdas and libraries that make first-class functions easier to program with, collection pipelines become an appealing choice.

Refactoring a Simple Loop into a Pipeline

I'll start with a simple example of a loop and show the basic way I refactor one into a collection pipeline. Let's imagine we have a list of authors, each of which has the following data structure.

class Author...
  public string Name { get; set; }
  public string TwitterHandle { get; set; }
  public string Company { get; set; }

This example uses C#.

Here is the loop.

class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    foreach (Author a in authors) {
      if (a.Company == company) {
        var handle = a.TwitterHandle;
        if (handle != null)
          result.Add(handle);
      }
    }
    return result;
  }

My first step in refactoring a loop into a collection pipeline is to apply Extract Variable on the loop collection.

class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    var loopStart = authors;
    foreach (Author a in loopStart) {
      if (a.Company == company) {
        var handle = a.TwitterHandle;
        if (handle != null)
          result.Add(handle);
      }
    }
    return result;
  }

This variable gives me a starting point for pipeline operations. I don't have a good name for it right now, so I'll use one that makes sense for the moment, expecting to rename it later.

I then start looking at bits of behavior in the loop. The first thing I see is a conditional check; I can move this to the pipeline with a filter operation.

class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    var loopStart = authors
      .Where(a => a.Company == company);
    foreach (Author a in loopStart) {
      var handle = a.TwitterHandle;
      if (handle != null)
        result.Add(handle);
    }
    return result;
  }

I see the next part of the loop operates on the twitter handle, rather than the author, so I can use a map operation.

class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    var loopStart = authors
      .Where(a => a.Company == company)
      .Select(a => a.TwitterHandle);
    foreach (string handle in loopStart) {
      if (handle != null)
        result.Add(handle);
    }
    return result;
  }

Next in the loop is another conditional, which again I can move to a filter operation.

class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    var loopStart = authors
      .Where(a => a.Company == company)
      .Select(a => a.TwitterHandle)
      .Where(h => h != null);
    foreach (string handle in loopStart) {
      result.Add(handle);
    }
    return result;
  }

All the loop now does is add everything in its loop collection into the result collection, so I can remove the loop and just return the pipeline result. Here's the final state of the code.

class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    return authors
      .Where(a => a.Company == company)
      .Select(a => a.TwitterHandle)
      .Where(h => h != null);
  }

What I like about collection pipelines is that I can see the flow of logic as the elements of the list pass through the pipeline. For me it reads very closely to how I'd define the outcome of the loop: "take the authors, choose those who work for the company, and get their twitter handles, removing any null handles". Furthermore, this style of code is familiar even across languages that have different syntaxes and different names for pipeline operators.

Java

public List<String> twitterHandles(List<Author> authors, String company) {
  return authors.stream()
      .filter(a -> a.getCompany().equals(company))
      .map(a -> a.getTwitterHandle())
      .filter(h -> null != h)
      .collect(toList());
}

Ruby

def twitter_handles authors, company
  authors
    .select {|a| company == a.company}
    .map {|a| a.twitter_handle}
    .reject {|h| h.nil?}
end

while this matches the other examples, I would replace the final reject with compact

Clojure

(defn twitter-handles [authors company]
  (->> authors
       (filter #(= company (:company %)))
       (map :twitter-handle)
       (remove nil?)))

F#

let twitterHandles (authors : seq<Author>, company : string) =
  authors
  |> Seq.filter (fun a -> a.Company = company)
  |> Seq.map (fun a -> a.TwitterHandle)
  |> Seq.choose (fun h -> h)

again, if I wasn't concerned about matching the structure of the other examples I would combine the map and choose into a single step

I've found that once I got used to thinking in terms of pipelines I could apply them quickly even in an unfamiliar language.
Since the fundamental approach is the same, it's relatively easy to translate even unfamiliar syntax and function names.

Refactoring within the Pipeline, and to a Comprehension

Once you have some behavior expressed as a pipeline, there are potential refactorings you can do by reordering steps in the pipeline. One such move is that if you have a map followed by a filter, you can usually move the filter before the map, like this.

class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    return authors
      .Where(a => a.Company == company)
      .Where(a => a.TwitterHandle != null)
      .Select(a => a.TwitterHandle);
  }

When you have two adjacent filters, you can combine them using a conjunction.

class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    return authors
      .Where(a => a.Company == company && a.TwitterHandle != null)
      .Select(a => a.TwitterHandle);
  }

Once I have a C# collection pipeline in the form of a simple filter and map like this, I can replace it with a Linq expression.

class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    return from a in authors
           where a.Company == company && a.TwitterHandle != null
           select a.TwitterHandle;
  }

I consider Linq expressions to be a form of list comprehension, and similarly you can do something like this with any language that supports list comprehensions. It's a matter of taste whether you prefer the list comprehension form or the pipeline form (I prefer pipelines). In general pipelines are more powerful, in that you can't refactor all pipelines into comprehensions.
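The same reordering applies to the Java stream version shown earlier. As a sketch (the Author accessors getCompany and getTwitterHandle follow the Java example above; the surrounding class and sample data are my own illustration, not from the original article), moving the null filter before the map and combining the two filters gives:

```java
import java.util.Arrays;
import java.util.List;
import static java.util.stream.Collectors.toList;

// Minimal Author class for illustration only; the article assumes an
// equivalent class with getCompany() and getTwitterHandle() accessors.
class Author {
    private final String name;
    private final String twitterHandle;
    private final String company;

    Author(String name, String twitterHandle, String company) {
        this.name = name;
        this.twitterHandle = twitterHandle;
        this.company = company;
    }

    String getTwitterHandle() { return twitterHandle; }
    String getCompany() { return company; }
}

public class PipelineRefactoring {
    // Filters reordered before the map and combined with a conjunction,
    // mirroring the C# refactoring above.
    static List<String> twitterHandles(List<Author> authors, String company) {
        return authors.stream()
                .filter(a -> a.getCompany().equals(company) && a.getTwitterHandle() != null)
                .map(Author::getTwitterHandle)
                .collect(toList());
    }

    public static void main(String[] args) {
        List<Author> authors = Arrays.asList(
                new Author("Ann", "@ann", "Acme"),
                new Author("Bob", null, "Acme"),
                new Author("Cid", "@cid", "Other"));
        System.out.println(twitterHandles(authors, "Acme")); // prints [@ann]
    }
}
```

Java has no built-in list comprehension form, so the pipeline is as far as this refactoring goes in that language.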
This example is an improved version of my previous example, Android GridView Example. Instead of using static images to display the grid items, let's make this example more realistic by downloading the data in real time from the server and rendering the grid items. The following video depicts the output of this example. Without wasting much time, let us jump straight into what it takes to build this kind of GridView. Follow these steps to complete this example.

1. Add GridView in Activity Layout

First, create a new Android project. For this example, I prefer to use Android Studio. Create a new layout file in your project's res/layout folder, name it activity_grid_view.xml, and add the following code blocks. The above layout is pretty straightforward. We have declared a GridView and a ProgressBar in the activity layout. The progress bar will be displayed while the data is being downloaded.

2. Declare GridView Item Layout

Let us now add another file named grid_item_layout.xml to the res/layout folder. This layout will be used by a custom grid adapter for laying out individual grid items. For the sake of simplicity, we are adding an ImageView and a TextView.

3. Adding Internet Permission

You might be aware that an Android application must declare all the permissions it requires. As we need to download the data from the server, we need to add the INTERNET permission. Add the following line to the AndroidManifest.xml file. Notice that we have also declared all the activities used in the application.

4. Adding the Picasso Image Downloading Library

The Android open-source developer community brings some interesting libraries that can be integrated easily into Android applications. They serve a great deal of purpose and save a lot of time. Here in this example, I am talking about Picasso, the image-loading library. We will add the Picasso library for downloading and caching images.
Visit here to learn more about how to use the Picasso library on Android. You can add the Picasso library by adding the following dependency to the build.gradle file.

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:21.0.3'
    compile 'com.squareup.picasso:picasso:2.5.2'
}

5. Create a GridView Custom Adapter

A grid view is an adapter view. It requires an adapter to render the collection of data items. Add a new class named GridViewAdapter.java to your project and add the following code snippets.

package com.javatechig.gridviewexample;

import java.util.ArrayList;

import android.app.Activity;
import android.content.Context;
import android.text.Html;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ArrayAdapter;
import android.widget.ImageView;
import android.widget.TextView;

import com.squareup.picasso.Picasso;

public class GridViewAdapter extends ArrayAdapter<GridItem> {

    private Context mContext;
    private int layoutResourceId;
    private ArrayList<GridItem> mGridData = new ArrayList<>();

    public GridViewAdapter(Context mContext, int layoutResourceId, ArrayList<GridItem> mGridData) {
        super(mContext, layoutResourceId, mGridData);
        this.layoutResourceId = layoutResourceId;
        this.mContext = mContext;
        this.mGridData = mGridData;
    }

    /**
     * Updates grid data and refreshes grid items.
     * @param mGridData
     */
    public void setGridData(ArrayList<GridItem> mGridData) {
        this.mGridData = mGridData;
        notifyDataSetChanged();
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        View row = convertView;
        ViewHolder holder;
        if (row == null) {
            LayoutInflater inflater = ((Activity) mContext).getLayoutInflater();
            row = inflater.inflate(layoutResourceId, parent, false);
            holder = new ViewHolder();
            holder.titleTextView = (TextView) row.findViewById(R.id.grid_item_title);
            holder.imageView = (ImageView) row.findViewById(R.id.grid_item_image);
            row.setTag(holder);
        } else {
            holder = (ViewHolder) row.getTag();
        }
        GridItem item = mGridData.get(position);
        holder.titleTextView.setText(Html.fromHtml(item.getTitle()));
        Picasso.with(mContext).load(item.getImage()).into(holder.imageView);
        return row;
    }

    static class ViewHolder {
        TextView titleTextView;
        ImageView imageView;
    }
}

Notice the following in the above code snippets:
The setGridData() method updates the data displayed on the GridView.
The Picasso.with().load() method is used to download the image from the URL and display it on the image view.
The GridViewAdapter class constructor requires the id of the grid item layout and the list of data to operate on.

You might be surprised where the GridItem class came from. It's not magic; we need to add a GridItem.java class to our project. The GridItem class looks as follows.

6. Download Data and Hook it to the Activity

Now we will be heading towards hooking the adapter to the GridView and making it functional. Create a new Java class, name it GridViewActivity.java, and perform the following steps.

Override the onCreate() method and set the layout by calling the setContentView() method.
Initialize the GridView and ProgressBar components by using their declared layout ids.
Initialize the custom GridView adapter by passing the grid row layout id and the list of GridItem objects.
Use AsyncTask to download data from the server; once the download is successful, read the JSON response stream.
Parse the JSON string into the list of GridItem objects.
Once downloading and parsing are completed, update the UI elements in the onPostExecute() callback.

The following code does all the above steps as described. Add the following code to the GridViewActivity class.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;

import android.content.Intent;
import android.os.AsyncTask;
import android.os.Bundle;
import android.support.v7.app.ActionBarActivity;
import android.util.Log;
import android.view.View;
import android.widget.AdapterView;
import android.widget.GridView;
import android.widget.ProgressBar;
import android.widget.Toast;

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

public class GridViewActivity extends ActionBarActivity {

    private static final String TAG = GridViewActivity.class.getSimpleName();

    private GridView mGridView;
    private ProgressBar mProgressBar;
    private GridViewAdapter mGridAdapter;
    private ArrayList<GridItem> mGridData;
    private String FEED_URL = "http://javatechig.com/?json=get_recent_posts&count=45";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_gridview);

        mGridView = (GridView) findViewById(R.id.gridView);
        mProgressBar = (ProgressBar) findViewById(R.id.progressBar);

        // Initialize with empty data
        mGridData = new ArrayList<>();
        mGridAdapter = new GridViewAdapter(this, R.layout.grid_item_layout, mGridData);
        mGridView.setAdapter(mGridAdapter);

        // Start download
        new AsyncHttpTask().execute(FEED_URL);
        mProgressBar.setVisibility(View.VISIBLE);
    }

    // Downloading data asynchronously
    public class AsyncHttpTask extends AsyncTask<String, Void, Integer> {
        @Override
        protected Integer doInBackground(String... params) {
            Integer result = 0;
            try {
                // Create Apache HttpClient
                HttpClient httpclient = new DefaultHttpClient();
                HttpResponse httpResponse = httpclient.execute(new HttpGet(params[0]));
                int statusCode = httpResponse.getStatusLine().getStatusCode();
                // 200 represents HTTP OK
                if (statusCode == 200) {
                    String response = streamToString(httpResponse.getEntity().getContent());
                    parseResult(response);
                    result = 1; // Successful
                } else {
                    result = 0; // Failed
                }
            } catch (Exception e) {
                Log.d(TAG, e.getLocalizedMessage());
            }
            return result;
        }

        @Override
        protected void onPostExecute(Integer result) {
            // Download complete. Let us update the UI
            if (result == 1) {
                mGridAdapter.setGridData(mGridData);
            } else {
                Toast.makeText(GridViewActivity.this, "Failed to fetch data!", Toast.LENGTH_SHORT).show();
            }
            mProgressBar.setVisibility(View.GONE);
        }
    }

    String streamToString(InputStream stream) throws IOException {
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(stream));
        String line;
        String result = "";
        while ((line = bufferedReader.readLine()) != null) {
            result += line;
        }
        // Close stream
        if (null != stream) {
            stream.close();
        }
        return result;
    }

    /**
     * Parses the feed results and builds the list
     * @param result
     */
    private void parseResult(String result) {
        try {
            JSONObject response = new JSONObject(result);
            JSONArray posts = response.optJSONArray("posts");
            GridItem item;
            for (int i = 0; i < posts.length(); i++) {
                JSONObject post = posts.optJSONObject(i);
                String title = post.optString("title");
                item = new GridItem();
                item.setTitle(title);
                // optJSONArray returns null when the key is absent, so the
                // null check below actually works (getJSONArray would throw).
                JSONArray attachments = post.optJSONArray("attachments");
                if (null != attachments && attachments.length() > 0) {
                    JSONObject attachment = attachments.getJSONObject(0);
                    if (attachment != null)
                        item.setImage(attachment.getString("url"));
                }
                mGridData.add(item);
            }
        } catch (JSONException e) {
            e.printStackTrace();
        }
    }
}

At this point, you will be able to run the app and notice that it downloads the data from the server and displays it on the GridView.

7. Handle GridView Click Event

Right now the GridView is not responding to user clicks. Let us make it more functional by adding the following code.

mGridView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
    public void onItemClick(AdapterView<?> parent, View v, int position, long id) {
        // Get item at position
        GridItem item = (GridItem) parent.getItemAtPosition(position);

        // Pass the image title and url to DetailsActivity
        Intent intent = new Intent(GridViewActivity.this, DetailsActivity.class);
        intent.putExtra("title", item.getTitle());
        intent.putExtra("image", item.getImage());

        // Start details activity
        startActivity(intent);
    }
});

When a user clicks on a grid item, we will start another activity that displays the full-screen image. You can start one activity from another by calling the startActivity() method. We need to pass the details of the item, such as the title and image URL, for displaying on the DetailsActivity.

8. Create Details Activity Layout

Add a new layout file to the res/layout directory, name it activity_details_view.xml, and add the following code snippets.

9. Completing the Details Activity

The DetailsActivity retrieves the details passed from GridViewActivity and renders them on the screen. Create a new class named DetailsActivity and add the following code snippets.
package com.javatechig.gridviewexample;

import android.os.Bundle;
import android.support.v7.app.ActionBar;
import android.support.v7.app.ActionBarActivity;
import android.text.Html;
import android.widget.ImageView;
import android.widget.TextView;

import com.squareup.picasso.Picasso;

public class DetailsActivity extends ActionBarActivity {

    private TextView titleTextView;
    private ImageView imageView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_details_view);

        ActionBar actionBar = getSupportActionBar();
        actionBar.hide();

        String title = getIntent().getStringExtra("title");
        String image = getIntent().getStringExtra("image");

        titleTextView = (TextView) findViewById(R.id.title);
        imageView = (ImageView) findViewById(R.id.grid_item_image);

        titleTextView.setText(Html.fromHtml(title));
        Picasso.with(this).load(image).into(imageView);
    }
}

10. Download the Complete Example

Download from GitHub.

11. Custom Activity Transition in GridView

Continue reading in our next tutorial.
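The GridItem listing itself never appears above. Inferring from how the adapter and activities use it (getTitle/setTitle, getImage/setImage), it is presumably a minimal holder class along these lines; this is a hypothetical reconstruction, not the original file:

```java
// Hypothetical reconstruction of GridItem, inferred from its usage in
// GridViewAdapter and GridViewActivity (a post title plus an image URL).
public class GridItem {
    private String title;
    private String image;

    public String getTitle() { return title; }

    public void setTitle(String title) { this.title = title; }

    public String getImage() { return image; }

    public void setImage(String image) { this.image = image; }
}
```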
Mockito-Java8 is a set of Mockito add-ons leveraging Java 8 and lambda expressions to make mocking with Mockito even more compact.

At the beginning of 2015 I gave my flash talk Java 8 brings power to testing! at GeeCON TDD 2015 and DevConf.cz 2015. In my speech, using 4 examples, I showed how Java 8 – namely lambda expressions – can simplify testing tools and testing in general. One of those tools was Mockito. To keep my PoC code from dying on slides and to make it easily available to others, I have released a small project with two Java 8 add-ons for Mockito, each useful in a specific case.

Quick introduction

As a prerequisite, let's assume we have the following data structure:

@Immutable
class ShipSearchCriteria {
    int minimumRange;
    int numberOfPhasers;
}

and a class we want to stub/mock:

public class TacticalStation {
    public int findNumberOfShipsInRangeByCriteria(ShipSearchCriteria searchCriteria) { ... }
}

The library provides two add-ons:

Lambda matcher – allows you to define matcher logic within a lambda expression.

given(ts.findNumberOfShipsInRangeByCriteria(
        argLambda(sc -> sc.getMinimumRange() > 1000))).willReturn(4);

Argument Captor – Java 8 edition – allows you to use `ArgumentCaptor` in one line (here with AssertJ):

verify(ts).findNumberOfShipsInRangeByCriteria(
        assertArg(sc -> assertThat(sc.getMinimumRange()).isLessThan(2000)));

Lambda matcher

With the help of the static method argLambda, a lambda matcher instance is created which can be used to define matcher logic within a lambda expression (here for stubbing). It could be especially useful when working with complex classes passed as an argument.
@Test
public void shouldAllowToUseLambdaInStubbing() {
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(
            argLambda(sc -> sc.getMinimumRange() > 1000))).willReturn(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
            new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
            new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

In comparison, the same logic implemented with a custom Answer in Java 7:

@Test
public void stubbingWithCustomAnswerShouldBeLonger() {  //old way
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(any())).willAnswer(new Answer<Integer>() {
        @Override
        public Integer answer(InvocationOnMock invocation) throws Throwable {
            Object[] args = invocation.getArguments();
            ShipSearchCriteria criteria = (ShipSearchCriteria) args[0];
            if (criteria.getMinimumRange() > 1000) {
                return 4;
            } else {
                return 0;
            }
        }
    });
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
            new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
            new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

Even Java 8 and a less readable construction don't help much:

@Test
public void stubbingWithCustomAnswerShouldBeLongerEvenAsLambda() {  //old way
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(any())).willAnswer(invocation -> {
        ShipSearchCriteria criteria = (ShipSearchCriteria) invocation.getArguments()[0];
        return criteria.getMinimumRange() > 1000 ? 4 : 0;
    });
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
            new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
            new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

Argument Captor – Java 8 edition

A static method assertArg creates an argument matcher whose implementation internally uses ArgumentMatcher with an assertion provided in a lambda expression.
The example below uses AssertJ to provide meaningful error messages, but any assertions (like the native ones from TestNG or JUnit) could be used (if really needed). This allows you to have an inlined ArgumentCaptor:

@Test
public void shouldAllowToUseAssertionInLambda() {
    //when
    ts.findNumberOfShipsInRangeByCriteria(searchCriteria);
    //then
    verify(ts).findNumberOfShipsInRangeByCriteria(
            assertArg(sc -> assertThat(sc.getMinimumRange()).isLessThan(2000)));
}

In comparison to 3 lines in the classic way:

@Test
public void shouldAllowToUseArgumentCaptorInClassicWay() {  //old way
    //when
    ts.findNumberOfShipsInRangeByCriteria(searchCriteria);
    //then
    ArgumentCaptor<ShipSearchCriteria> captor =
            ArgumentCaptor.forClass(ShipSearchCriteria.class);
    verify(ts).findNumberOfShipsInRangeByCriteria(captor.capture());
    assertThat(captor.getValue().getMinimumRange()).isLessThan(2000);
}

Summary

The presented add-ons were created as a PoC for my conference speech, but they should be fully functional and potentially useful in specific cases. To use them in your project it is enough to use Mockito 1.10.x or 2.0.x-beta, add `mockito-java8` as a dependency, and of course compile your project with Java 8+. More details are available on the project webpage: https://github.com/szpak/mockito-java8
Unit testing mandates testing the unit in isolation. In order to achieve that, the general consensus is to design our classes in a decoupled way using DI. In this paradigm, whether using a framework or not, and whether wiring happens at compile time or at runtime, object instantiation is the responsibility of dedicated factories. In particular, this means the new keyword should be used only in those factories.

Sometimes, however, having a dedicated factory just doesn't fit. This is the case when injecting a narrow-scope instance into a wider-scope instance. A use-case I stumbled upon recently concerns an event bus, with code like this one:

public class Sample {

    private EventBus eventBus;

    public Sample(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public void done() {
        Result result = computeResult();
        eventBus.post(new DoneEvent(result));
    }

    private Result computeResult() {
        ...
    }
}

With a runtime DI framework – such as the Spring framework – and if the DoneEvent had no argument, this could be changed to a lookup method pattern.

public void done() {
    eventBus.post(getDoneEvent());
}

public abstract DoneEvent getDoneEvent();

Unfortunately, the argument prevents us from using this nifty trick, and it cannot be done with runtime injection anyway. It doesn't mean the done() method shouldn't be tested, though. The problem is not only how to assert that a new DoneEvent is posted on the bus when the method is called, but also how to check the wrapped result.

Experienced software engineers probably know about the Mockito.any(Class) method. It could be used like this:

public void doneShouldPostDoneEvent() {
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(Mockito.any(DoneEvent.class));
}

In this case, we make sure an event of the right kind has been posted to the queue, but we are not sure what the result was. And if the result cannot be asserted, the confidence in the code decreases. Mockito to the rescue.
Mockito provides captors that act as placeholders for parameters. The above code can be changed like this:

public void doneShouldPostDoneEventWithExpectedResult() {
    ArgumentCaptor<DoneEvent> captor = ArgumentCaptor.forClass(DoneEvent.class);
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(captor.capture());
    DoneEvent event = captor.getValue();
    assertThat(event.getResult(), is(expectedResult));
}

At line 2, we create a new ArgumentCaptor. At line 6, we replace the any() usage with captor.capture() and the trick is done. The posted event is then captured by Mockito and made available through captor.getValue() at line 7. The final line, using Hamcrest, makes sure the result is the expected one.
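The same verify-and-capture pattern exists in other ecosystems too. As an illustration only, here is a minimal sketch using Python's stdlib unittest.mock, where a mock's call_args plays the role of the ArgumentCaptor; the Sample and DoneEvent classes below are hypothetical Python stand-ins for the Java classes above:

```python
from unittest.mock import MagicMock

# Hypothetical Python stand-ins for the Java classes in the article.
class DoneEvent:
    def __init__(self, result):
        self.result = result

class Sample:
    def __init__(self, event_bus):
        self.event_bus = event_bus

    def done(self):
        result = self.compute_result()
        self.event_bus.post(DoneEvent(result))

    def compute_result(self):
        return 42  # placeholder for the real computation

# The test: verify post() was called once and capture its argument.
event_bus = MagicMock()
sample = Sample(event_bus)
sample.done()

event_bus.post.assert_called_once()
# call_args.args holds the positional arguments of the recorded call
# (Python 3.8+); this is the "captured" DoneEvent.
captured_event = event_bus.post.call_args.args[0]
assert isinstance(captured_event, DoneEvent)
assert captured_event.result == 42
```

As with Mockito's captor.getValue(), the point is that the test can assert on the wrapped result, not merely on the event's type.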
[This article was written by Sveta Smirnova]

Like any good, and thus lazy, engineer I don’t like to start things manually. Creating directories and configuration files, and specifying paths and ports via the command line, is too boring. I have already written about how I survive when I need to start a MySQL server (here). There is also MySQL Sandbox, which can be used for the same purpose. But what do you do if you want to start Percona XtraDB Cluster this way? Fortunately we, at Percona, have engineers who created an automation solution for starting PXC. This solution uses Docker. To explore it you need to:

1. Clone the pxc-docker repository: git clone https://github.com/percona/pxc-docker
2. Install Docker Compose as described here
3. cd pxc-docker/docker-bld
4. Follow the instructions from the README file:
a) ./docker-gen.sh 5.6 (docker-gen.sh takes a PXC branch as an argument, 5.6 is the default, and it looks for it on github.com/percona/percona-xtradb-cluster)
b) Optional: docker-compose build (if you see it is not updating with changes).
c) docker-compose scale bootstrap=1 members=2 for a 3-node cluster
5. Check which ports are assigned to the containers:

$ docker port dockerbld_bootstrap_1 3306
0.0.0.0:32768
$ docker port dockerbld_members_1 4567
0.0.0.0:32772
$ docker port dockerbld_members_2 4568
0.0.0.0:32776

Now you can connect with the usual MySQL clients:

$ mysql -h 0.0.0.0 -P 32768 -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.6.21-70.1 MySQL Community Server (GPL), wsrep_25.8.rXXXX

Copyright (c) 2009-2015 Percona LLC and/or its affiliates
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
To change MySQL options, either mount the file at runtime with a volumes entry such as /tmp/my.cnf:/etc/my.cnf in docker-compose.yml, or connect to the container’s bash (docker exec -i -t container_name /bin/bash), edit my.cnf and run docker restart container_name.

Notes:
If you don’t want to build, use the ready-to-use images.
If you don’t want to run Docker Compose as the root user, add yourself to the docker group.
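For the mount-at-runtime option, the relevant docker-compose.yml fragment might look like the sketch below; the service name bootstrap is an assumption based on the scale command earlier, and the host path is just an example:

```yaml
# Hypothetical fragment of docker-compose.yml.
# "bootstrap" matches the service scaled above; adjust to your setup.
bootstrap:
  volumes:
    - /tmp/my.cnf:/etc/my.cnf
```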
TROLEE is an online shopping portal based in India offering fashion products to customers worldwide. TROLEE offers a wide range of products in the categories of designer sarees, salwar kameez, kurtis, exclusive wedding collections, Indian designer collections, Western outfits, jeans, T-shirts and women’s apparel at wholesale prices in India. TROLEE has been called a shopping paradise, as customers always feel they can shop bigger than ever in each event organized by TROLEE. Shipping is free of charge on every order within India, and delivery is available worldwide. Our customers have appreciated our festival offers and discounts, assured service and quality products. Just visit us at trolee.com.
Written by Tim Brock.

Scatter plots are a wonderful way of showing (apparent) relationships in bivariate data. Patterns and clusters that you wouldn't see in a huge block of data in a table can become instantly visible on a page or screen. With all the hype around big data in recent years it's easy to assume that having more data is always an advantage. But as we add more and more data points to a scatter plot we can start to lose these patterns and clusters. This problem, a result of overplotting, is demonstrated in the animation below.

The data in the animation above is randomly generated from a pair of simple bivariate distributions. The distinction between the two distributions becomes less and less clear as we add more and more data.

So what can we do about overplotting? One simple option is to make the data points smaller. (Note that this is a poor "solution" if many data points share exactly the same values.) We can also make them semi-transparent. And we can combine these two options. These refinements certainly help when we have ten thousand data points. However, by the time we've reached a million points the two distributions have seemingly merged into one again. Making the points smaller and more transparent might help; nevertheless, at some point we may have to consider a change of visualization. We'll get to that later.

But first let's try to supplement our visualization with some extra information; specifically, let's visualize the marginal distributions. We have several options. There's far too much data for a rug plot, but we can bin the data and show histograms. Or we can use a smoother option: a kernel density plot. Finally, we could use the empirical cumulative distribution. This last option avoids any binning or smoothing, but the results are probably less intuitive. I'll go with the kernel density option here, but you might prefer a histogram.
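A kernel density estimate like the one chosen above can be sketched in a few lines of plain Python. This is a naive O(n·m) illustration with a hand-picked fixed bandwidth; real tools choose the bandwidth automatically and evaluate far more efficiently:

```python
import math

def gaussian_kde(data, bandwidth):
    """Return a function estimating the density of `data` by summing
    a Gaussian bump centred on each data point."""
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))

    def density(x):
        return norm * sum(
            math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in data
        )

    return density

# Two well-separated clusters give a twin-peaked density,
# analogous to "variable 2" in the article.
data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
density = gaussian_kde(data, bandwidth=0.5)

# The estimate integrates to ~1 over a wide grid (Riemann sum).
step = 0.01
total = sum(density(-5 + i * step) * step for i in range(1500))

# The density peaks near the clusters and dips between them.
assert density(0.0) > density(2.5)
```

Evaluating this density on a grid and plotting the resulting curve gives the smoothed marginal shown in the figures.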
The animated GIF below is the same as the one above but with the smoothed marginal distributions added. I've left scales off to avoid clutter and because we're only really interested in rough judgements of relative height.

Adding marginal distributions, particularly the distribution of variable 2, helps clarify that two different distributions are present in the bivariate data. The twin-peaked nature of variable 2 is evident whether there are a thousand data points or a million. The relative sizes of the two components are also clear. By contrast, the marginal distribution of variable 1 has only a single peak, despite coming from two distinct distributions. This should make it clear that adding marginal distributions is by no means a universal solution to overplotting in scatter plots.

To reinforce this point, the animation below shows a completely different set of (generated) data points in a scatter plot with marginal distributions. The data again comes from a random sample of two different 2D distributions, but both marginal distributions of the complete dataset fail to highlight this separation. As previously, when the number of data points is large the distinction between the two clusters can't be seen in the scatter plot either.

Returning to point size and opacity, what do we get if we make the data points very small and almost completely transparent? We can now clearly distinguish two clusters in each dataset. It's difficult to make out any fine detail, though. Since we've lost that fine detail anyway, it seems apt to question whether we really want to draw a million data points. It can be tediously slow and impossible in certain contexts. 2D histograms are an alternative. By binning the data we can reduce the number of points to plot and, if we pick an appropriate color scale, pick out some of the features that were lost in the clutter of the scatter plot.
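The binning step behind a 2D histogram is straightforward to sketch in plain Python; a Counter keyed by integer bin indices does the counting, and colour mapping is left to whatever charting tool you use. The cluster centres below are invented for illustration:

```python
from collections import Counter
import random

def histogram2d(points, bin_width):
    """Count points per (x, y) bin; keys are integer bin indices."""
    counts = Counter()
    for x, y in points:
        counts[(int(x // bin_width), int(y // bin_width))] += 1
    return counts

# Two synthetic Gaussian clusters, loosely mimicking the article's
# upper-left-heavy dataset (5000 vs 2000 points).
random.seed(0)
cluster_a = [(random.gauss(0, 1), random.gauss(5, 1)) for _ in range(5000)]
cluster_b = [(random.gauss(5, 1), random.gauss(0, 1)) for _ in range(2000)]

counts = histogram2d(cluster_a + cluster_b, bin_width=1.0)

# Every point lands in exactly one bin...
assert sum(counts.values()) == 7000
# ...and bins near a cluster centre dominate the gap between clusters.
assert counts[(0, 5)] > counts[(2, 2)]
```

Plotting one coloured cell per bin then recovers the count difference between the two clusters that the overplotted scatter plot hides.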
After some experimenting I picked a color scale that runs from black through green to white at the high end. Note that this is (almost) the reverse of the effect created by overplotting in the scatter plots above. In both 2D histograms we can clearly see the two different clusters representing the two distributions from which the data is drawn. In the first case we can also see that there are more counts in the upper-left cluster than in the bottom-right cluster, a detail that is lost in the scatter plot with a million data points (but more obvious from the marginal distributions). Conversely, in the case of the second dataset we can see that the "heights" of the two clusters are roughly comparable.

3D charts are overused, but here (see below) I think they actually work quite well in terms of providing a broad picture of where the data is and isn't concentrated. Feature occlusion is a problem with 3D charts, so if you're going to go down this route when exploring your own data I highly recommend using software that allows for user interaction through rotation and zooming.

In summary, scatter plots are a simple and often effective way of visualizing bivariate data. If, however, your chart suffers from overplotting, try reducing point size and opacity. Failing that, a 2D histogram or even a 3D surface plot may be helpful. In the latter case, be wary of occlusion.
16 Apr 2015

In a year defined by international expansion and investments in new infrastructure to enhance operational efficiency, Gulftainer recorded robust growth across its entire terminal portfolio.

Iain Rawlinson, Group Commercial Director of Gulftainer, said: “The positive growth recorded by Gulftainer across its terminals globally underlines the confidence of our partners in our ability to meet their requirements efficiently. Our extensive network and technological expertise are the strengths that have enabled us to expand our footprint to new locations. We continuously invest in enhancing our infrastructure, thus boosting reliability, operational efficiency and productivity.”

He added: “The growth in volume achieved throughout our terminals is a strong testament to the expertise and dedication of our employees and the strong productivity levels we are able to achieve on a consistent basis. In the dynamic global trade routes linking Asia and Europe, our terminals today play an increasingly significant role. Even as we expand and grow our business, we also remain committed to the communities we serve by creating new jobs and supporting the domestic economy.”

In global markets, Gulftainer’s Saudi terminals recorded impressive growth, with the Northern Container Terminal accounting for 1.9 million TEUs, sustaining previous-year trends, while the Jubail Container Terminal (JCT) grew 22 per cent to over 396,000 TEUs. Total volume at the Saudi terminals exceeded 2.29 million TEUs. Gulftainer’s Umm Qasr terminal also achieved significant growth of 46 per cent in 2014, while the Recife terminal in Brazil recorded volume growth of 7 per cent. Gulftainer’s UAE terminals recorded a total volume of 3.8 million TEUs, in line with the all-round growth in business. The company marked another significant milestone, with the Sharjah Container Terminal (SCT) surpassing 400,000 TEUs in annual throughput for the very first time.
Operations at SCT were energised by the positive growth in global trade and the arrival of new services, such as UASC’s Gulf India Service (GIS1), which now connects Sharjah with Sohar in Oman, Mundra in India and Karachi in Pakistan. The addition of this service represented a significant development for Sharjah and boosted the national carrier’s volumes through SCT last year.

The only fully fledged operational container terminal in the UAE located outside the Strait of Hormuz, Khorfakkan Container Terminal (KCT) has today emerged as one of the most important transshipment hubs for the Arabian Gulf, the Indian Subcontinent, the Gulf of Oman and the East African markets. To further strengthen operations at KCT, Gulftainer has received and commissioned new state-of-the-art Ship-to-Shore (STS) and Rubber-Tyred Gantry (RTG) cranes that will further increase overall performance and productivity. This enhanced infrastructure represents an investment of over US$60 million. Gulftainer has set an ambitious target of tripling its volume over the next decade through organic growth across its existing businesses, the exploration of greenfield opportunities and potential M&A activities.
Gulftainer, a privately owned, independent terminal operating and logistics company, marked another significant milestone when the Sharjah Container Terminal (SCT) surpassed 400,000 TEUs (Twenty-foot Equivalent Units) in annual throughput during 2014. SCT again recorded double-digit growth compared with the previous year’s volumes. The achievement came with an impressive safety record under challenging conditions, including space constraints.

Iain Rawlinson, Group Commercial Director of Gulftainer, said that the professional approach of Gulftainer’s management, along with consistently high productivity levels, was a driving force behind the Terminal’s success. “SCT has always marketed itself as ‘The Flexible Alternative’, and the individual attention we extend to our customers offers us an advantage over competitors.”

The 400,000th unit was discharged from Mag Container Lines’ vessel ‘Mag Success’, one of the Terminal’s regular callers, which considers Sharjah her base port. Speaking on behalf of Mag Container Lines’ CEO, BDM Jamal Saleh congratulated the Terminal on its achievement. He said: “The announcement today reflects how Gulftainer and MCL have grown together over the years and, in partnership, managed to reach this target. The continuous support, flexibility and excellent operational performance MCL receives from Gulftainer, both operationally and logistically, has contributed greatly to this achievement.”

The milestone was achieved on the shift of Duty Superintendent Mehmood Malik, the longest-serving employee, with over 38 years at the Terminal, who was part of the team when the first TEU crossed the quay.
Mehmood has witnessed several records and milestones and recalls handling 2,500 TEUs in 1976: “At that time we could not imagine reaching the levels of throughput we have today, so this is a very special moment for me.”

SCT, which is managed and operated by Gulftainer on behalf of the Sharjah Port Authority, has the honour of being the site of the first container terminal in the Gulf, having commenced operations in 1976. SCT is located in the heart of Sharjah and is an ideal gateway for import and export cargo, with direct links throughout the Gulf, Asia, Europe, the Americas and Africa. The strong performance of the Sharjah economy has supported the growth of many of SCT’s customers, enabling them to increase their throughput and contribute to a record year for the Terminal. The relationships built with customers have been strengthened by the joint efforts of Gulftainer’s sales and marketing team and by the high levels of service and operational efficiency at the terminal. “When looking at the Sharjah market, the dedicated team at SCT listen to and address the many requirements of our diverse and interesting customer base,” said Iain Rawlinson.

SCT’s figures were further boosted by the arrival of new services throughout the year, including UASC’s Gulf India Service (GIS1), which now connects Sharjah with Sohar in Oman, Mundra in India and Karachi in Pakistan and which boosted the national carrier’s volumes through SCT in November and December.

Gulftainer’s current portfolio covers UAE operations in Khorfakkan Port and Port Khalid in Sharjah, as well as activities at Umm Qasr in Iraq, Recife in Brazil, Jeddah and Jubail in Saudi Arabia and Tripoli Port in Lebanon, which will be operational in April 2016. The company marked another milestone in 2014 with its expansion to the US by signing a long-term agreement to operate the container and multi-cargo terminal at Port Canaveral in Florida.
With current handling activity of over 6 million TEUs, the company has set an ambitious target of tripling that volume over the next decade through organic growth across its existing businesses, the exploration of greenfield opportunities and potential M&A activities.