DZone
The Latest Coding Topics

R: Linear models with the lm function, NA values and Collinearity
In my continued playing around with R I've sometimes noticed 'NA' values in the linear regression models I created but hadn't really thought about what that meant. On the advice of Peter Huber I recently started working my way through Coursera's Regression Models, which has a whole slide explaining its meaning: in this case 'z' doesn't help us in predicting Fertility since it doesn't give us any more information that we can't already get from 'Agriculture' and 'Education'. Although in this case we know why 'z' doesn't have a coefficient, sometimes it may not be clear which other variable the NA one is highly correlated with.

Multicollinearity (also collinearity) is a statistical phenomenon in which two or more predictor variables in a multiple regression model are highly correlated, meaning that one can be linearly predicted from the others with a non-trivial degree of accuracy.

In that situation we can make use of the alias function to explain the collinearity, as suggested in this StackOverflow post:

library(datasets); data(swiss); require(stats); require(graphics)
z <- swiss$Agriculture + swiss$Education
fit = lm(Fertility ~ . + z, data = swiss)

> alias(fit)
Model :
Fertility ~ Agriculture + Examination + Education + Catholic + Infant.Mortality + z

Complete :
  (Intercept) Agriculture Examination Education Catholic Infant.Mortality
z 0           1           0           1         0        0

In this case we can see that 'z' is highly correlated with both Agriculture and Education, which makes sense given it's the sum of those two variables. When we notice that there's an NA coefficient in our model we can choose to exclude that variable and the model will still have the same coefficients as before:

> require(dplyr)
> summary(lm(Fertility ~ . + z, data = swiss))$coefficients
                   Estimate   Std. Error   t value     Pr(>|t|)
(Intercept)      66.9151817  10.70603759  6.250229 1.906051e-07
Agriculture      -0.1721140   0.07030392 -2.448142 1.872715e-02
Examination      -0.2580082   0.25387820 -1.016268 3.154617e-01
Education        -0.8709401   0.18302860 -4.758492 2.430605e-05
Catholic          0.1041153   0.03525785  2.952969 5.190079e-03
Infant.Mortality  1.0770481   0.38171965  2.821568 7.335715e-03

> summary(lm(Fertility ~ ., data = swiss))$coefficients
                   Estimate   Std. Error   t value     Pr(>|t|)
(Intercept)      66.9151817  10.70603759  6.250229 1.906051e-07
Agriculture      -0.1721140   0.07030392 -2.448142 1.872715e-02
Examination      -0.2580082   0.25387820 -1.016268 3.154617e-01
Education        -0.8709401   0.18302860 -4.758492 2.430605e-05
Catholic          0.1041153   0.03525785  2.952969 5.190079e-03
Infant.Mortality  1.0770481   0.38171965  2.821568 7.335715e-03

If we call alias now we won't see any output:

> alias(lm(Fertility ~ ., data = swiss))
Model :
Fertility ~ Agriculture + Examination + Education + Catholic + Infant.Mortality
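The same effect is easy to reproduce outside R. As a minimal sketch (my illustration with made-up data, not from the article): when one column is the exact sum of two others, the design matrix is rank-deficient, which is what forces lm() to report an NA coefficient, and dropping the redundant column leaves the fitted values unchanged.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 47  # same row count as the swiss dataset, purely illustrative

agriculture = rng.normal(size=n)
education = rng.normal(size=n)
z = agriculture + education          # perfectly collinear with the other two
y = 3 * agriculture - 2 * education + rng.normal(scale=0.1, size=n)

X_full = np.column_stack([np.ones(n), agriculture, education, z])
X_reduced = np.column_stack([np.ones(n), agriculture, education])

# Four columns but only rank 3: the redundancy lm() flags with NA.
assert np.linalg.matrix_rank(X_full) == 3

# Fitted values from the rank-deficient and the reduced model agree,
# because both column spaces are identical.
coef_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
coef_red, *_ = np.linalg.lstsq(X_reduced, y, rcond=None)
assert np.allclose(X_full @ coef_full, X_reduced @ coef_red)
```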
October 26, 2014
by Mark Needham
· 16,489 Views
How to Avoid Hash Collisions When Using MySQL’s CRC32 Function
Originally written by Arunjith Aravindan

Percona Toolkit's pt-table-checksum performs an online replication consistency check by executing checksum queries on the master, which produce different results on replicas that are inconsistent with the master, and the companion tool pt-table-sync synchronizes data efficiently between MySQL tables. By default the tools use CRC32. Other good choices include MD5 and SHA1. If you have installed the FNV_64 user-defined function, pt-table-sync will detect it and prefer to use it, because it is much faster than the built-ins. You can also use MURMUR_HASH if you've installed that user-defined function. Both of these are distributed with Maatkit. For details please see the tools' documentation.

Below are test cases similar to what you might have encountered. A table checksum lets us confirm that two tables are identical, which is useful to verify that a slave server is in sync with its master. The following test cases with pt-table-checksum and pt-table-sync will help you use the tools more accurately. For example, in a master-slave setup we have a table with a primary key on column "a" and a unique key on column "b". Here the master and slave tables are not in sync: they share two identical rows and contain two distinct rows. pt-table-checksum should identify the difference between master and slave, and pt-table-sync should then sync the tables with two REPLACE queries.

Master            Slave
+-----+-----+     +-----+-----+
| a   | b   |     | a   | b   |
+-----+-----+     +-----+-----+
| 2   | 1   |     | 2   | 1   |
| 1   | 2   |     | 1   | 2   |
| 4   | 3   |     | 3   | 3   |
| 3   | 4   |     | 4   | 4   |
+-----+-----+     +-----+-----+

Case 1: Non-cryptographic hash function (CRC32) and the hash collision

The tables on the source and target contain two different rows, so you would expect the tools to identify the difference. But the scenarios below explain how the tools can be used incorrectly, how to avoid that, and how to make things more consistent and reliable when using the tools in production. The tools use CRC32 checksums by default, and CRC32 is prone to hash collisions. In the case below, the non-cryptographic function (CRC32) is not able to identify the two distinct rows: it generates the same aggregate value even though the tables hold distinct values.

CREATE TABLE `t1` (
  `a` int(11) NOT NULL,
  `b` int(11) NOT NULL,
  PRIMARY KEY (`a`),
  UNIQUE KEY `b` (`b`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Master:

[root@localhost mysql]# pt-table-checksum --replicate=percona.checksum --create-replicate-table --databases=db1 --tables=t1 localhost --user=root --password=*** --no-check-binlog-format
TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
09-17T00:59:45 0 0 4 1 0 1.081 db1.t1

Slave:

[root@localhost bin]# ./pt-table-sync --print --execute --replicate=percona.checksum --tables db1.t1 --user=root --password=*** --verbose --sync-to-master 192.**.**.**
# Syncing via replication h=192.**.**.**,p=...,u=root
# DELETE REPLACE INSERT UPDATE ALGORITHM START END EXIT DATABASE.TABLE

Narrowed down to BIT_XOR:

Master:

mysql> SELECT BIT_XOR(CAST(CRC32(CONCAT_WS('#', `a`, `b`)) AS UNSIGNED)) FROM `db1`.`t1`;
+------------------------------------------------------------+
| BIT_XOR(CAST(CRC32(CONCAT_WS('#', `a`, `b`)) AS UNSIGNED)) |
+------------------------------------------------------------+
|                                                    6581445 |
+------------------------------------------------------------+
1 row in set (0.00 sec)

Slave:

mysql> SELECT BIT_XOR(CAST(CRC32(CONCAT_WS('#', `a`, `b`)) AS UNSIGNED)) FROM `db1`.`t1`;
+------------------------------------------------------------+
| BIT_XOR(CAST(CRC32(CONCAT_WS('#', `a`, `b`)) AS UNSIGNED)) |
+------------------------------------------------------------+
|                                                    6581445 |
+------------------------------------------------------------+
1 row in set (0.16 sec)

Case 2: Adding a truly distinct row

As the tools are not able to identify the difference, let us add a new row to the slave and check whether the tools identify the distinct values. So I am adding a new row (5,5) to the slave:

mysql> insert into db1.t1 values(5,5);
Query OK, 1 row affected (0.05 sec)

Master            Slave
+-----+-----+     +-----+-----+
| a   | b   |     | a   | b   |
+-----+-----+     +-----+-----+
| 2   | 1   |     | 2   | 1   |
| 1   | 2   |     | 1   | 2   |
| 4   | 3   |     | 3   | 3   |
| 3   | 4   |     | 4   | 4   |
+-----+-----+     | 5   | 5   |
                  +-----+-----+

[root@localhost mysql]# pt-table-checksum --replicate=percona.checksum --create-replicate-table --databases=db1 --tables=t1 localhost --user=root --password=*** --no-check-binlog-format
TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
09-17T01:01:13 0 1 4 1 0 1.054 db1.t1

[root@localhost bin]# ./pt-table-sync --print --execute --replicate=percona.checksum --tables db1.t1 --user=root --password=*** --verbose --sync-to-master 192.**.**.**
# Syncing via replication h=192.**.**.**,p=...,u=root
# DELETE REPLACE INSERT UPDATE ALGORITHM START END EXIT DATABASE.TABLE
DELETE FROM `db1`.`t1` WHERE `a`='5' LIMIT 1 /*percona-toolkit src_db:db1 src_tbl:t1 src_dsn:P=3306,h=192.**.**.**,p=...,u=root dst_db:db1 dst_tbl:t1 dst_dsn:h=192.**.**.**,p=...,u=root lock:1 transaction:1 changing_src:percona.checksum replicate:percona.checksum bidirectional:0 pid:5205 user:root host:localhost.localdomain*/;
REPLACE INTO `db1`.`t1`(`a`, `b`) VALUES ('3', '4') /*percona-toolkit src_db:db1 src_tbl:t1 src_dsn:P=3306,h=192.**.**.**,p=...,u=root dst_db:db1 dst_tbl:t1 dst_dsn:h=192.**.**.**,p=...,u=root lock:1 transaction:1 changing_src:percona.checksum replicate:percona.checksum bidirectional:0 pid:5205 user:root host:localhost.localdomain*/;
REPLACE INTO `db1`.`t1`(`a`, `b`) VALUES ('4', '3') /*percona-toolkit src_db:db1 src_tbl:t1 src_dsn:P=3306,h=192.**.**.**,p=...,u=root dst_db:db1 dst_tbl:t1 dst_dsn:h=192.**.**.**,p=...,u=root lock:1 transaction:1 changing_src:percona.checksum replicate:percona.checksum bidirectional:0 pid:5205 user:root host:localhost.localdomain*/;
# 1 2 0 0 Chunk 01:01:43 01:01:43 2 db1.t1

The tools are now able to identify the newly added row on the slave as well as the two other rows that differ.

Case 3: The advantage of cryptographic hash functions (e.g. MD5)

Let us restore the tables to the state of Case 1 and ask the tools to use a cryptographic hash function (MD5) instead of the usual non-cryptographic one. The default CRC32 function provides no security due to its simple mathematical structure and is too prone to hash collisions, whereas MD5 provides a better level of integrity. So let us try with --function=md5 and see the result.

Master            Slave
+-----+-----+     +-----+-----+
| a   | b   |     | a   | b   |
+-----+-----+     +-----+-----+
| 2   | 1   |     | 2   | 1   |
| 1   | 2   |     | 1   | 2   |
| 4   | 3   |     | 3   | 3   |
| 3   | 4   |     | 4   | 4   |
+-----+-----+     +-----+-----+

Narrowed down to BIT_XOR:

Master:

mysql> SELECT 'test', 't2', '1', NULL, NULL, NULL, COUNT(*) AS cnt, COALESCE(LOWER(CONCAT(LPAD(CONV(BIT_XOR(CAST(CONV(SUBSTRING(@crc, 1, 16), 16, 10) AS UNSIGNED)), 10, 16), 16, '0'), LPAD(CONV(BIT_XOR(CAST(CONV(SUBSTRING(@crc := md5(CONCAT_WS('#', `a`, `b`)), 17, 16), 16, 10) AS UNSIGNED)), 10, 16), 16, '0'))), 0) AS crc FROM `db1`.`t1`;
+------+----+---+------+------+------+-----+----------------------------------+
| test | t2 | 1 | NULL | NULL | NULL | cnt | crc                              |
+------+----+---+------+------+------+-----+----------------------------------+
| test | t2 | 1 | NULL | NULL | NULL |   4 | 000000000000000063f65b71e539df48 |
+------+----+---+------+------+------+-----+----------------------------------+
1 row in set (0.00 sec)

Slave:

mysql> SELECT 'test', 't2', '1', NULL, NULL, NULL, COUNT(*) AS cnt, COALESCE(LOWER(CONCAT(LPAD(CONV(BIT_XOR(CAST(CONV(SUBSTRING(@crc, 1, 16), 16, 10) AS UNSIGNED)), 10, 16), 16, '0'), LPAD(CONV(BIT_XOR(CAST(CONV(SUBSTRING(@crc := md5(CONCAT_WS('#', `a`, `b`)), 17, 16), 16, 10) AS UNSIGNED)), 10, 16), 16, '0'))), 0) AS crc FROM `db1`.`t1`;
+------+----+---+------+------+------+-----+----------------------------------+
| test | t2 | 1 | NULL | NULL | NULL | cnt | crc                              |
+------+----+---+------+------+------+-----+----------------------------------+
| test | t2 | 1 | NULL | NULL | NULL |   4 | 0000000000000000df024e1a4a32c31f |
+------+----+---+------+------+------+-----+----------------------------------+
1 row in set (0.00 sec)

[root@localhost mysql]# pt-table-checksum --replicate=percona.checksum --create-replicate-table --function=md5 --databases=db1 --tables=t1 localhost --user=root --password=*** --no-check-binlog-format
TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
09-23T23:57:52 0 1 12 1 0 0.292 db1.t1

[root@localhost bin]# ./pt-table-sync --print --execute --replicate=percona.checksum --tables db1.t1 --user=root --password=*** --verbose --function=md5 --sync-to-master 192.***.***.***
# Syncing via replication h=192.168.56.102,p=...,u=root
# DELETE REPLACE INSERT UPDATE ALGORITHM START END EXIT DATABASE.TABLE
REPLACE INTO `db1`.`t1`(`a`, `b`) VALUES ('3', '4') /*percona-toolkit src_db:db1 src_tbl:t1 src_dsn:P=3306,h=192.168.56.101,p=...,u=root dst_db:db1 dst_tbl:t1 dst_dsn:h=192.***.***.***,p=...,u=root lock:1 transaction:1 changing_src:percona.checksum replicate:percona.checksum bidirectional:0 pid:5608 user:root host:localhost.localdomain*/;
REPLACE INTO `db1`.`t1`(`a`, `b`) VALUES ('4', '3') /*percona-toolkit src_db:db1 src_tbl:t1 src_dsn:P=3306,h=192.168.56.101,p=...,u=root dst_db:db1 dst_tbl:t1 dst_dsn:h=192.***.***.***,p=...,u=root lock:1 transaction:1 changing_src:percona.checksum replicate:percona.checksum bidirectional:0 pid:5608 user:root host:localhost.localdomain*/;
# 0 2 0 0 Chunk 04:46:04 04:46:04 2 db1.t1

Master            Slave
+-----+-----+     +-----+-----+
| a   | b   |     | a   | b   |
+-----+-----+     +-----+-----+
| 2   | 1   |     | 2   | 1   |
| 1   | 2   |     | 1   | 2   |
| 4   | 3   |     | 4   | 3   |
| 3   | 4   |     | 3   | 4   |
+-----+-----+     +-----+-----+

MD5 did the trick and solved the problem. See the BIT_XOR results for MD5 above: the function identifies the distinct values in the tables and produces different crc values. MD5 (Message-Digest Algorithm 5) is a well-known cryptographic hash function with a 128-bit hash value. MD5 is widely used in security-related applications and is also frequently used to check integrity, but MD5() and SHA1() are very CPU-intensive, making checksumming slower when chunk-time is taken into account.
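The collision in Case 1 is not bad luck: CRC32 is linear over XOR, so swapping values between rows of equal-length encodings leaves the XOR aggregate unchanged. A small Python sketch (my illustration, not from the original post) reproduces the behaviour with the same 'a#b' row encoding:

```python
import hashlib
import zlib

def table_checksum(rows, row_hash):
    """Aggregate per-row hashes with XOR, like pt-table-checksum's BIT_XOR."""
    acc = 0
    for a, b in rows:
        acc ^= row_hash(f"{a}#{b}".encode())
    return acc

def crc32_row(data):
    return zlib.crc32(data)

def md5_row(data):
    # Full 128-bit MD5 digest as an integer
    return int.from_bytes(hashlib.md5(data).digest(), "big")

master = [(2, 1), (1, 2), (4, 3), (3, 4)]
slave = [(2, 1), (1, 2), (3, 3), (4, 4)]

# '4#3' XOR '3#4' equals '3#3' XOR '4#4' byte-wise, and CRC32 is linear,
# so the aggregates collide even though the tables differ:
assert table_checksum(master, crc32_row) == table_checksum(slave, crc32_row)

# An MD5-based aggregate tells the tables apart:
assert table_checksum(master, md5_row) != table_checksum(slave, md5_row)
```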
October 24, 2014
by Peter Zaitsev
· 6,737 Views
AngularJS: Different Ways of Using Array Filters
In this article, we learn how to utilize AngularJS's array filter features in a variety of use cases. Read on to get started.
October 24, 2014
by Siva Prasad Reddy Katamreddy
· 315,132 Views · 5 Likes
Parsing a Tab Delimited File Using OpenCSV
I prefer OpenCSV for CSV parsing in Java. That library also supports parsing of tab-delimited files; here's how. Just a quick gist:

import java.io.FileReader;
import java.io.IOException;

import org.junit.Test;

import au.com.bytecode.opencsv.CSVReader;

public class LoadTest {

    @Test
    public void testLoad() throws IOException {
        // Pass '\t' as the separator character to parse tab-delimited data
        CSVReader reader = new CSVReader(new FileReader("/Users/bone/Desktop/foo_data.tab"), '\t');
        String[] record;
        while ((record = reader.readNext()) != null) {
            for (String value : record) {
                System.out.println(value); // Debug only
            }
        }
        reader.close();
    }
}
October 24, 2014
by Brian O' Neill
· 23,895 Views
Metal Kernel Functions / Compute Shaders in Swift
As part of a project to create a GPU-based reaction diffusion simulation, I started to look at using Metal in Swift this weekend. I've done similar work in the past targeting the Flash Player and using AGAL. Metal is a far higher level language than AGAL: it's based on C++ with a richer syntax and includes compute functions. Whereas in AGAL, to run cellular automata, I'd create a rectangle out of two triangles with a vertex shader and execute the reaction diffusion functions in a separate fragment shader, a compute shader is more direct: I can get and set textures and it can operate on individual pixels of that texture without the need for a vertex shader.

The Swift code I discuss in this article is based heavily on two articles at Metal By Example: Introduction to Compute Programming in Metal and Fundamentals of Image Processing in Metal. Both include Objective-C source code, so hopefully my Swift implementation will help some.

My application has four main steps: initialise Metal, create a Metal texture from a UIImage, apply a kernel function to that texture, and convert the newly generated texture back into a UIImage and display it. I'm using a simple example shader that changes the saturation of the input image, so I've also added a slider that changes the saturation value. Let's look at each step one by one.

Initialising Metal

Initialising Metal is pretty simple: inside my view controller's overridden viewDidLoad(), I create a pointer to the default Metal device:

var device: MTLDevice! = nil
[...]
device = MTLCreateSystemDefaultDevice()

I also need to create a library and command queue:

defaultLibrary = device.newDefaultLibrary()
commandQueue = device.newCommandQueue()

Finally, I add a reference to my Metal function to the library and synchronously create and compile a compute pipeline state:

let kernelFunction = defaultLibrary.newFunctionWithName("kernelShader")
pipelineState = device.newComputePipelineStateWithFunction(kernelFunction!, error: nil)

The kernelShader points to the saturation image processing function, written in Metal, that lives in my Shaders.metal file:

kernel void kernelShader(texture2d<float, access::read> inTexture [[texture(0)]],
                         texture2d<float, access::write> outTexture [[texture(1)]],
                         constant AdjustSaturationUniforms &uniforms [[buffer(0)]],
                         uint2 gid [[thread_position_in_grid]])
{
    float4 inColor = inTexture.read(gid);
    float value = dot(inColor.rgb, float3(0.299, 0.587, 0.114));
    float4 grayColor(value, value, value, 1.0);
    float4 outColor = mix(grayColor, inColor, uniforms.saturationFactor);
    outTexture.write(outColor, gid);
}

Creating a Metal Texture from a UIImage

There are a few steps in converting a UIImage into an MTLTexture instance. I create an array of UInt8 to hold the raw pixel data and an appropriate CGBitmapInfo, then use CGContextDrawImage() to copy the image into a bitmap context:

let image = UIImage(named: "grand_canyon.jpg")
let imageRef = image.CGImage
let imageWidth = CGImageGetWidth(imageRef)
let imageHeight = CGImageGetHeight(imageRef)
let bytesPerRow = bytesPerPixel * imageWidth
var rawData = [UInt8](count: Int(imageWidth * imageHeight * 4), repeatedValue: 0)
let bitmapInfo = CGBitmapInfo(CGBitmapInfo.ByteOrder32Big.toRaw() | CGImageAlphaInfo.PremultipliedLast.toRaw())
let context = CGBitmapContextCreate(&rawData, imageWidth, imageHeight, bitsPerComponent, bytesPerRow, rgbColorSpace, bitmapInfo)

CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(imageWidth), CGFloat(imageHeight)), imageRef)

Once all of those steps have executed, I can create a new texture and use its replaceRegion() method to write the image into it:

let textureDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(MTLPixelFormat.RGBA8Unorm, width: Int(imageWidth), height: Int(imageHeight), mipmapped: true)
texture = device.newTextureWithDescriptor(textureDescriptor)

let region = MTLRegionMake2D(0, 0, Int(imageWidth), Int(imageHeight))
texture.replaceRegion(region, mipmapLevel: 0, withBytes: &rawData, bytesPerRow: Int(bytesPerRow))

I also create an empty texture which the kernel function will write into:

let outTextureDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(texture.pixelFormat, width: texture.width, height: texture.height, mipmapped: false)
outTexture = device.newTextureWithDescriptor(outTextureDescriptor)

Invoking the Kernel Function

The next block of work is to set the textures and another variable on the kernel function and execute the shader. The first step is to instantiate a command buffer and command encoder:

let commandBuffer = commandQueue.commandBuffer()
let commandEncoder = commandBuffer.computeCommandEncoder()

...then set the pipeline state (which we got from device.newComputePipelineStateWithFunction() earlier) and textures on the command encoder:

commandEncoder.setComputePipelineState(pipelineState)
commandEncoder.setTexture(texture, atIndex: 0)
commandEncoder.setTexture(outTexture, atIndex: 1)

The filter requires an additional parameter that defines the saturation amount. This is passed into the shader via an MTLBuffer. To populate the buffer, I've created a small struct:

struct AdjustSaturationUniforms {
    var saturationFactor: Float
}

Then I use newBufferWithBytes() to pass in my saturationFactor float value:

var saturationFactor = AdjustSaturationUniforms(saturationFactor: self.saturationFactor)
var buffer: MTLBuffer = device.newBufferWithBytes(&saturationFactor, length: sizeof(AdjustSaturationUniforms), options: nil)
commandEncoder.setBuffer(buffer, offset: 0, atIndex: 0)

This is now accessible inside the shader as an argument to its kernel function:

constant AdjustSaturationUniforms &uniforms [[buffer(0)]]

Now I'm ready to invoke the function itself. Metal kernel functions use thread groups to break up their workload into chunks. In my example, I create thread groups of 8 x 8 threads, then send them off to the GPU:

let threadGroupCount = MTLSizeMake(8, 8, 1)
let threadGroups = MTLSizeMake(texture.width / threadGroupCount.width, texture.height / threadGroupCount.height, 1)

commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupCount)
commandEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

Converting the Texture to a UIImage

Finally, now that the kernel function has executed, we need to do the reverse of the above and get the image held in outTexture into a UIImage so it can be displayed. Again, I use a region to define the size and the texture's getBytes() to populate an array of UInt8:

let imageSize = CGSize(width: texture.width, height: texture.height)
let imageByteCount = Int(imageSize.width * imageSize.height * 4)
let bytesPerRow = bytesPerPixel * UInt(imageSize.width)
var imageBytes = [UInt8](count: imageByteCount, repeatedValue: 0)
let region = MTLRegionMake2D(0, 0, Int(imageSize.width), Int(imageSize.height))

outTexture.getBytes(&imageBytes, bytesPerRow: Int(bytesPerRow), fromRegion: region, mipmapLevel: 0)

Now that imageBytes holds the raw data, it's a few lines to create a CGImage:

let providerRef = CGDataProviderCreateWithCFData(
    NSData(bytes: &imageBytes, length: imageBytes.count * sizeof(UInt8))
)
let bitmapInfo = CGBitmapInfo(CGBitmapInfo.ByteOrder32Big.toRaw() | CGImageAlphaInfo.PremultipliedLast.toRaw())
let renderingIntent = kCGRenderingIntentDefault
let imageRef = CGImageCreate(UInt(imageSize.width), UInt(imageSize.height), bitsPerComponent, bitsPerPixel, bytesPerRow, rgbColorSpace, bitmapInfo, providerRef, nil, false, renderingIntent)

imageView.image = UIImage(CGImage: imageRef)

...and we're done! Metal requires an A7 or A8 processor and this code has been built and tested under Xcode 6. All the source code is available at my GitHub repository here.
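The arithmetic inside kernelShader is easy to sanity-check on a single pixel. Here is a Python sketch of the same luminance-and-mix computation (my illustration, not part of the Swift project): a saturation factor of 0 yields the gray pixel, a factor of 1 returns the input unchanged.

```python
def adjust_saturation(rgba, factor):
    """Per-pixel analogue of the Metal kernel: mix a gray version of the
    pixel with the original, weighted by the saturation factor."""
    r, g, b, a = rgba
    # Rec. 601 luma weights, as in the shader's dot() call
    value = 0.299 * r + 0.587 * g + 0.114 * b
    gray = (value, value, value, 1.0)
    # mix(x, y, t) = x * (1 - t) + y * t
    return tuple(x * (1 - factor) + y * factor
                 for x, y in zip(gray, (r, g, b, a)))

pixel = (0.8, 0.4, 0.2, 1.0)
print(adjust_saturation(pixel, 0.0))  # fully desaturated: equal r, g, b
print(adjust_saturation(pixel, 1.0))  # original pixel, unchanged
```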
October 23, 2014
by Simon Gladman
· 11,689 Views
Understanding Information Retrieval by Using Apache Lucene and Tika - Part 1
Introduction

In this tutorial, the Apache Lucene and Apache Tika frameworks will be explained through their core concepts (e.g. parsing, MIME detection, content analysis, indexing, scoring, boosting) via illustrative examples that should be applicable not only to seasoned software developers but to beginners to content analysis and programming as well. We assume you have a working knowledge of the Java™ programming language and plenty of content to analyze.

Throughout this tutorial, you will learn:

  • how to use Apache Tika's API and its most relevant functions
  • how to develop code with the Apache Lucene API and its most important modules
  • how to integrate Apache Lucene and Apache Tika in order to build your own piece of software that stores and retrieves information efficiently (project code is available for download)

What Are Lucene and Tika?

According to Apache Lucene's site, Apache Lucene represents an open source Java library for indexing and searching from within large collections of documents. The index size represents roughly 20-30% of the size of the text indexed, and the search algorithms provide features like:

  • ranked searching: best results returned first
  • many powerful query types: phrase queries, wildcard queries, proximity queries, range queries and more (in this tutorial we will demonstrate only phrase queries)
  • fielded search (e.g. title, author, contents)
  • sorting by any field
  • flexible faceting, highlighting, joins and result grouping
  • pluggable ranking models, including the vector space model and Okapi BM25

But Lucene's main purpose is to deal directly with text, and we want to manipulate documents, which have various formats and encodings. For parsing document content and properties, the Apache Tika library is necessary. Apache Tika is a library that provides a flexible and robust set of interfaces that can be used in any context where metadata analysis and structured text extraction is needed. The key component of Apache Tika is the Parser (org.apache.tika.parser.Parser) interface, because it hides the complexity of different file formats while providing a simple and powerful mechanism to extract structured text content and metadata from all sorts of documents.

Criteria for Tika Parsing Design

  • Streamed parsing: the interface should require neither the client application nor the parser implementation to keep the full document content in memory or spooled to disk. This allows even huge documents to be parsed without excessive resource requirements.
  • Structured content: a parser implementation should be able to include structural information (headings, links, etc.) in the extracted content. A client application can use this information, for example, to better judge the relevance of different parts of the parsed document.
  • Input metadata: a client application should be able to include metadata like the file name or declared content type with the document to be parsed. The parser implementation can use this information to better guide the parsing process.
  • Output metadata: a parser implementation should be able to return document metadata in addition to document content. Many document formats contain metadata, like the name of the author, that may be useful to client applications.
  • Context sensitivity: while the default settings and behaviour of Tika parsers should work well for most use cases, there are still situations where more fine-grained control over the parsing process is desirable. It should be easy to inject such context-specific information into the parsing process without breaking the layers of abstraction.

Requirements

  • Maven 2.0 or higher
  • Java 1.6 SE or higher

Lesson 1: Automate Metadata Extraction From Any File Type

Our premises are the following: we have a collection of documents stored on disk or in a database and we would like to index them; these documents can be Word documents, PDFs, HTML files, plain text files, etc. As we are developers, we would like to write reusable code that extracts file properties regarding format (metadata) and file content.

Apache Tika has a MIME type repository and a set of schemes (any combination of MIME magic, URL patterns, XML root characters, or file extensions) to determine if a particular file, URL, or piece of content matches one of its known types. If the content does match, Tika has detected its MIME type and can proceed to select the appropriate parser. In the sample code, the file type detection and its parsing are covered inside the class com.retriever.lucene.index.IndexCreator, method indexFile.

Listing 1.1 Analyzing a file with Tika

public static DocumentWithAbstract indexFile(Analyzer analyzer, File file) throws IOException {
    Metadata metadata = new Metadata();
    ContentHandler handler = new BodyContentHandler(10 * 1024 * 1024);
    ParseContext context = new ParseContext();
    Parser parser = new AutoDetectParser();
    InputStream stream = new FileInputStream(file); // open stream
    try {
        parser.parse(stream, handler, metadata, context); // parse the stream
    } catch (TikaException e) {
        e.printStackTrace();
    } catch (SAXException e) {
        e.printStackTrace();
    } finally {
        stream.close(); // close the stream
    }
    // more code here
}

The above code displays how a file is parsed using org.apache.tika.parser.AutoDetectParser; this implementation was chosen because we would like to parse documents regardless of their format. Also, for handling the content, the org.apache.tika.sax.BodyContentHandler was constructed with a writeLimit given as parameter (10 * 1024 * 1024); this constructor creates a content handler that writes XHTML body character events to an internal string buffer and, in the case of documents with large content, is less likely to throw a SAXException (thrown when the default write limit is reached).

As a result of our parsing we have obtained a Metadata object that we can now use to detect file properties (the title or any other header specific to a document format). Metadata processing can be done as described below (com.retriever.lucene.index.IndexCreator, method indexFileDescriptors):

Listing 1.2 Processing metadata

private static Document indexFileDescriptors(String fileName, Metadata metadata) {
    Document doc = new Document();
    // store the file name in a separate TextField
    doc.add(new TextField(ISearchConstants.FIELD_FILE, fileName, Store.YES));
    for (String key : metadata.names()) {
        String name = key.toLowerCase();
        String value = metadata.get(key);
        if (StringUtils.isBlank(value)) {
            continue;
        }
        if ("keywords".equalsIgnoreCase(key)) {
            for (String keyword : value.split(",?(\\s+)")) {
                doc.add(new TextField(name, keyword, Store.YES));
            }
        } else if (ISearchConstants.FIELD_TITLE.equalsIgnoreCase(key)) {
            doc.add(new TextField(name, value, Store.YES));
        } else {
            doc.add(new TextField(name, fileName, Store.NO));
        }
    }
    return doc;
}

In the method presented above we store the file name in a separate field and also the document's title (a document can have a title different from its file name); we are not interested in storing other information.
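Tika combines several detection schemes (MIME magic, URL patterns, XML root characters, file extensions). As a rough illustration of just the last of these, here is a hedged Python sketch using the standard library's mimetypes module (this is not Tika's own API, and it is far weaker than Tika's magic-byte detection):

```python
import mimetypes

# Extension-based detection: the simplest of the schemes a detector can
# combine, but often enough to route a file to a plausible parser.
for filename in ["report.pdf", "notes.html", "data.xml", "readme.txt"]:
    mime, _encoding = mimetypes.guess_type(filename)
    print(filename, "->", mime)
```

A real detector would fall back to content sniffing when the extension is missing or lies, which is exactly the gap Tika's MIME magic fills.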
October 22, 2014
by Ana-Maria Mihalceanu
· 18,361 Views · 4 Likes
Measuring String Concatenation in Logging
Introduction I had an interesting conversation today about the cost of using string concatenation in log statements. In particular, debug log statements often need to print out parameters or the current value of variables, so they need to build the log message dynamically. So you wind up with something like this: logger.debug("parameters: " + param1 + "; " + param2); The issue arises when debug logging is turned off. Inside the logger.debug() method a flag is checked and the method returns immediately; this is generally pretty fast. But the string concatenation had to occur to build the parameter prior to calling the method, so you still pay its cost. Since debug tends to be turned off in production, this is exactly when the difference matters. For this reason, we have pretty much all been trained to do this: if (logger.isDebugEnabled()) { logger.debug("parameters: " + param1 + "; " + param2); } The discussion was about how much difference this "good practice" makes. Caliper This kind of question is perfect for a micro-benchmark. My own favorite tool for this purpose is Caliper. Caliper runs small snippets of code enough times to average out variations. It passes in a number of repetitions, which it calculates in order to make sure that the whole method takes long enough to measure given the resolution of the system clock. Caliper also detects garbage collection and HotSpot compiling that might impact the accuracy of the tests. Caliper uploads results to a Google App Engine application; its sign-in supports Google logins and issues an API key that can be used to organize results and list them. A typical timing method looks like this: public String timeMultStringNoCheck(long reps) { for (int i = 0; i < reps; i++) { logger.debug(strings[0] + " " + strings[1] + " " + strings[2] + " " + strings[3] + " " + strings[4]); } return strings[0]; } The return value is not used; it is included in the method solely to ensure that Java does not optimize away the method.
Similarly, the content of the variables used should be randomly generated to avoid compile-time optimization. The full example is available in one of my GitHub repositories, in the org.anvard.introtojava.log package. Results The outcome is pretty interesting. String concatenation carries a significant penalty, around two orders of magnitude for our example that concatenates five strings. Interestingly, even in the case where we do not use string concatenation (i.e. the simpleString methods), the penalty is around 4x. This is probably the time spent pushing the string parameter onto the stack. The examples with doubles, using String.format(), are even more extreme: four orders of magnitude. The elapsed time here is about 4 ms, large enough that if the log statement were in a commonly used method, the performance hit would be noticeable. The final method, multStringParams, uses a feature available in the SLF4J API. It works similarly to String.format(), but in a simple token-replace fashion. Most importantly, it does not perform the token replacement unless the logging level is enabled. This makes this form just as fast as the "check" forms, but more compact. Of course, this only works if no special formatting of the log string is needed, or if the formatting can be shifted to a method such as toString(). What is especially surprising is that this method did not show a penalty for building the object array necessary to pass the parameters into the method. This may have been optimized out by the Java runtime since there was no chance of the parameters being used. Conclusion The practice of checking whether a logging level is enabled before building the log statement is certainly worthwhile and should be something teams check during peer review.
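The contrast between eager concatenation and parameterized logging can be demonstrated without a logging framework at all. The sketch below uses a hypothetical DebugLogger stand-in (not the real SLF4J API) plus an argument type that counts its own toString() calls, so you can observe that the concatenating call formats its argument even when debug is off, while the parameterized call does not:

```java
// Sketch of why parameterized logging beats eager string concatenation.
// DebugLogger is a made-up stand-in for an SLF4J-style logger, not the real API.
public class LazyLoggingSketch {

    // An argument whose toString() invocations we can count.
    public static final class Tracked {
        public static int toStringCalls = 0;
        @Override public String toString() { toStringCalls++; return "tracked"; }
    }

    public static final class DebugLogger {
        private final boolean debugEnabled;
        public DebugLogger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
        public boolean isDebugEnabled() { return debugEnabled; }

        // Eager form: the caller already concatenated (and called toString) before we got here.
        public void debug(String message) {
            if (!debugEnabled) return;
            // ...would write the message...
        }

        // Parameterized form: arguments are only formatted if debug is enabled.
        public void debug(String template, Object... args) {
            if (!debugEnabled) return;
            for (Object arg : args) {
                template = template.replaceFirst("\\{\\}", String.valueOf(arg));
            }
            // ...would write the formatted message...
        }
    }

    public static void main(String[] args) {
        DebugLogger off = new DebugLogger(false);
        Tracked value = new Tracked();

        off.debug("value: " + value);           // concatenation runs: toString() is called
        int afterEager = Tracked.toStringCalls; // 1

        off.debug("value: {}", value);          // disabled: no formatting, no toString()
        int afterLazy = Tracked.toStringCalls;  // still 1

        System.out.println(afterEager + " " + afterLazy); // prints "1 1"
    }
}
```

With a real SLF4J logger the same contrast applies: logger.debug("parameters: {}; {}", param1, param2) defers all formatting until after the level check, which is exactly why the multStringParams variant measured as fast as the guarded forms.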
October 20, 2014
by Alan Hohn
· 10,022 Views
Functional Programming with Java 8 Functions
Learn how to use lambda expressions and anonymous functions in Java 8.
October 20, 2014
by Edwin Dalorzo
· 328,803 Views · 25 Likes
Sound Synthesis in Swift: A Core Audio Tone Generator
The other day, Morgan at Swift London mentioned it may be interesting to use my Swift node-based user interface as the basis for an audio synthesiser. Always ready for a challenge, I've spent a little time looking into Core Audio to see how this could be done and created a little demonstration app: a multi-channel tone generator. I've done something not too dissimilar targeting Flash in the past. In that project, I had to manually create 8192 samples to synthesise a tone and ended up using ActionScript Workers to do that work in the background. After some poking around, it seems a similar approach can be taken in Core Audio, but to create pure sine waves there's a far simpler way: using Audio Toolbox. Luckily, I found this excellent article by Gene De Lisa. His SwiftSimpleGraph project has all the boilerplate code to create sine waves, so all I had to do was add a user interface. My project contains four ToneWidget instances, each of which contains numeric dials for frequency and velocity and a sine wave renderer. There's also an additional sine wave renderer that displays the composite frequency of all four channels. The sine wave renderer creates a bitmap image of the wave based on some code Joseph Lord tweaked in my Swift reaction-diffusion code in the summer. It exposes a setFrequencyVelocityPairs() method which accepts an array of FrequencyVelocityPair instances. When that's invoked, the drawSineWave() method loops over the array, creating and summing vertical values for each column in the bitmap: for i in 1 ..< Int(frame.width) { let scale = M_PI * 5 let curveX = Double(i) var curveY = Double(frame.height / 2) for pair in frequencyVelocityPairs { let frequency = Double(pair.frequency) / 127.0 let velocity = Double(pair.velocity) / 127.0 curveY += ((sin(curveX / scale * frequency * 5)) * (velocity * 10)) } [...]
Then, rather than simply plotting the generated position for each column, it draws a line between the newly calculated vertical position for the current column and the previous vertical position for the previous column: for yy in Int(min(previousCurveY, curveY)) ... Int(max(previousCurveY, curveY)) { let pixelIndex : Int = (yy * Int(frame.width) + i); pixelArray[pixelIndex].r = UInt8(255 * colorRef[0]); pixelArray[pixelIndex].g = UInt8(255 * colorRef[1]); pixelArray[pixelIndex].b = UInt8(255 * colorRef[2]); } The result is a continuous wave: Because the sine wave renderer accepts an array of values, I'm able to use the same component for both the individual channels (where the array length is one) and for the composite of all four (where the array length is four). I was ready to use GCD to do the sine wave drawing in a background thread, but both the simulator and my old iPad happily do this in the main thread, keeping the user interface totally responsive. Up in the view controller, I have an array containing each widget and another array containing the currently playing notes, which is populated inside viewDidLoad(): override func viewDidLoad() { super.viewDidLoad() view.addSubview(sineWaveRenderer) for i in 0 ... 3 { let toneWidget = ToneWidget(index: i, frame: CGRectZero) toneWidget.addTarget(self, action: "toneWidgetChangeHandler:", forControlEvents: UIControlEvents.ValueChanged) toneWidgets.append(toneWidget) view.addSubview(toneWidget) currentNotes.append(toneWidget.getFrequencyVelocityPair()) soundGenerator.playNoteOn(UInt32(toneWidget.getFrequencyVelocityPair().frequency), velocity: UInt32(toneWidget.getFrequencyVelocityPair().velocity), channelNumber: UInt32(toneWidget.getIndex())) } } When the frequency or the velocity is changed by the user in any of those widgets, I use the value in the currentNotes array to switch off the currently playing note, update the master sine wave renderer and play the new note: func toneWidgetChangeHandler(toneWidget : ToneWidget) { soundGenerator.playNoteOff(UInt32(currentNotes[toneWidget.getIndex()].frequency), channelNumber: UInt32(toneWidget.getIndex())) updateSineWave() soundGenerator.playNoteOn(UInt32(toneWidget.getFrequencyVelocityPair().frequency), velocity: UInt32(toneWidget.getFrequencyVelocityPair().velocity), channelNumber: UInt32(toneWidget.getIndex())) currentNotes[toneWidget.getIndex()] = toneWidget.getFrequencyVelocityPair() } The updateSineWave() method simply loops over the widgets, gets each value and passes those into the sine wave renderer: func updateSineWave() { var values = [FrequencyVelocityPair]() for widget in toneWidgets { values.append(widget.getFrequencyVelocityPair()) } sineWaveRenderer.setFrequencyVelocityPairs(values) } Once again, big thanks to Gene De Lisa, who has really done all the hard work of navigating the Core Audio and Audio Toolbox APIs. All of the source code for this project is available at my GitHub repository here.
October 17, 2014
by Simon Gladman
· 11,931 Views · 1 Like
NetBeans 8.0's Five New Performance Hints
NetBeans 8.0 introduces several new Java hints. Although a large number of these new hints relate to the Java Persistence API, I focus on five new hints in the Performance category. The five new "Performance Hints" introduced with NetBeans 8.0 are: Boxing of already boxed value; Redundant String.toString(); Replace StringBuffer/StringBuilder by String; Unnecessary temporary during conversion from String; Unnecessary temporary during conversion to String. Each of these five performance-related Java hints is illustrated in this post with screen snapshots taken from NetBeans 8.0 and code that demonstrates the hints. There are two screen snapshots for each hint: one showing the text displayed when the cursor hovers over the line of code marked with yellow underlining, and one showing the suggested course of action to address that hint (shown when clicking on the yellow light bulb to the left of the flagged line). Some of the captured screen snapshots include examples of code that avoid the hint. Boxing of Already Boxed Value Redundant String.toString() Replace StringBuffer/StringBuilder by String Unnecessary Temporary During Conversion from String Unnecessary Temporary During Conversion to String Unless I've done something incorrectly, there appears to be a minor bug with this last hint: it reports "Unnecessary temporary when converting from String" when, in this case, it should really be "Unnecessary temporary when converting to String". This is not a big deal, as the condition is flagged and the action to fix it seems appropriate.
Although the benefit of this optimization as shown in my simple examples is almost negligible, it could lead to much greater savings when used in code with loops that perform these same unnecessary instantiations thousands of times. Even without consideration of the performance benefit, these hints help to remind Java developers and teach developers new to Java about the most appropriate mechanisms for acquiring instances and primitive values.
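Without the NetBeans screenshots, the five flagged patterns can still be shown in plain code. The sketch below is my own reconstruction of what each hint flags, paired with the replacement each hint name suggests; the variable names are illustrative, not taken from the post:

```java
// Hand-written examples of the patterns the five NetBeans 8.0 performance
// hints flag, each followed by the fix the hint suggests.
public class PerformanceHintExamples {
    public static void main(String[] args) {
        // 1. Boxing of already boxed value: wrapping an Integer again is redundant.
        Integer boxed = 42;
        Integer reBoxed = Integer.valueOf(boxed); // flagged; just use 'boxed'

        // 2. Redundant String.toString(): calling toString() on a String.
        String name = "NetBeans";
        String same = name.toString(); // flagged; 'name' is already a String

        // 3. Replace StringBuffer/StringBuilder by String: a builder that is
        //    appended to once and converted immediately is just a String.
        String viaBuilder = new StringBuilder().append("8.0").toString(); // flagged
        String direct = "8.0";                                            // suggested fix

        // 4. Unnecessary temporary during conversion from String.
        int viaTemp = Integer.valueOf("123").intValue(); // flagged
        int parsed = Integer.parseInt("123");            // suggested fix

        // 5. Unnecessary temporary during conversion to String.
        String viaWrapper = new Integer(123).toString(); // flagged
        String toText = Integer.toString(123);           // suggested fix

        System.out.println(reBoxed + " " + same + " " + viaBuilder + " "
                + parsed + " " + toText); // prints "42 NetBeans 8.0 123 123"
    }
}
```

In each pair the flagged form and the fix produce the same value; the hint simply removes a temporary object (or a redundant call), which is where the savings in loops come from.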
October 16, 2014
by Dustin Marx
· 6,588 Views
Building HTML5 Form Validation Bubble Replacements
Written by TJ VanToll. I've written about HTML5 form validation over the last few years, and one of the most common questions I get is about bubbles. By bubbles I mean the UI controls browsers display validation errors in. Below you can see Chrome's, Firefox's, and IE's implementations: For whatever reason, we developers (or more likely our designer colleagues) have a deep-seated desire to style these things. But unfortunately we can't, as zero browsers provide styling hooks that target these bubbles. Chrome used to provide a series of vendor-prefixed pseudo-elements (::-webkit-validation-bubble-*), but they were removed in Chrome 28. So what's a developer to do? Well, although browsers don't allow you to customize their bubbles, the constraint validation spec does allow you to suppress the browser's bubble UI and build your own. The rest of the article shows you how to do just that. Warning: don't go down the path of building a bubble replacement lightly. With the default bubbles you get some pretty complex functionality for free, such as positioning and accessibility. With custom bubbles you have to address those concerns yourself (or use a library that does). Suppressing the Default Bubbles The first step to building a custom UI is suppressing the native bubbles. You do that by listening to each form control's invalid event and preventing its default behavior. For example, the following form uses no validation bubbles because of the event.preventDefault() call in the invalid event handler. Submit The invalid event does not bubble (no pun intended), so if you want to prevent the validation bubbles on multiple elements you must attach a capture-phase listener. If you're confused about the difference between the bubbling and capturing phases of DOM events, check out this MDN article for a pretty good explanation.
The following code prevents bubbles on both inputs with a single invalid event listener on the parent element: Submit You can use this approach to remove the browser's UI for form validation, but once you do so, you have to build something custom to replace it. Building Alternative UIs There are countless ways of displaying form validation errors, and none of them are necessarily right or wrong. (OK, there are some wrong ways of doing this.) Let's look at a few common approaches that you may want to take. For each, we'll use the very simple name and email address form below. We'll also use some simple CSS to make this form look halfway decent. Name: Email: Submit One important note before we get started: all of these UIs only work in Internet Explorer version 10 and up, as the constraint validation API is not available in older versions. If you need to support old IE and still want to use HTML5 form validation, check out this slide deck in which I outline some options for doing so. Alternative UI #1: List of messages A common way of showing validation errors is in a box at the top of the screen, and this behavior is relatively easy to build with the HTML5 form validation APIs. Before we get into the code, here's what it looks like in action.
Here’s the code you need to build that UI: function replaceValidationUI( form ) { // Suppress the default bubbles form.addEventListener( "invalid", function( event ) { event.preventDefault(); }, true ); // Support Safari, iOS Safari, and the Android browser—each of which do not prevent // form submissions by default form.addEventListener( "submit", function( event ) { if ( !this.checkValidity() ) { event.preventDefault(); } }); // Add a container to hold error messages form.insertAdjacentHTML( "afterbegin", "<ul class='error-messages'></ul>" ); var submitButton = form.querySelector( "button:not([type=button]), input[type=submit]" ); submitButton.addEventListener( "click", function( event ) { var invalidFields = form.querySelectorAll( ":invalid" ), listHtml = "", errorMessages = form.querySelector( ".error-messages" ), label; for ( var i = 0; i < invalidFields.length; i++ ) { label = form.querySelector( "label[for=" + invalidFields[ i ].id + "]" ); listHtml += "<li>" + label.innerHTML + " " + invalidFields[ i ].validationMessage + "</li>"; } // Update the list with the new error messages errorMessages.innerHTML = listHtml; // If there are errors, give focus to the first invalid field and show // the error messages container if ( invalidFields.length > 0 ) { invalidFields[ 0 ].focus(); errorMessages.style.display = "block"; } }); } // Replace the validation UI for all forms var forms = document.querySelectorAll( "form" ); for ( var i = 0; i < forms.length; i++ ) { replaceValidationUI( forms[ i ] ); } This example assumes that each form field has a corresponding <label>, where the id attribute of the form field matches the for attribute of the <label>. You may want to tweak the code that builds the messages themselves to match your application, but other than that this should be something you can simply drop in. Alternative UI #2: Messages under fields Sometimes, instead of showing a list of messages at the top of the screen, you want to associate a message with its corresponding field. Here's a UI that does that.
To build this UI, most of the code remains the same as the first approach, with a few subtle differences in the submit button's click event handler. function replaceValidationUI( form ) { // Suppress the default bubbles form.addEventListener( "invalid", function( event ) { event.preventDefault(); }, true ); // Support Safari, iOS Safari, and the Android browser—each of which do not prevent // form submissions by default form.addEventListener( "submit", function( event ) { if ( !this.checkValidity() ) { event.preventDefault(); } }); var submitButton = form.querySelector( "button:not([type=button]), input[type=submit]" ); submitButton.addEventListener( "click", function( event ) { var invalidFields = form.querySelectorAll( ":invalid" ), errorMessages = form.querySelectorAll( ".error-message" ), parent; // Remove any existing messages for ( var i = 0; i < errorMessages.length; i++ ) { errorMessages[ i ].parentNode.removeChild( errorMessages[ i ] ); } for ( var i = 0; i < invalidFields.length; i++ ) { parent = invalidFields[ i ].parentNode; parent.insertAdjacentHTML( "beforeend", "<div class='error-message'>" + invalidFields[ i ].validationMessage + "</div>" ); } // If there are errors, give focus to the first invalid field if ( invalidFields.length > 0 ) { invalidFields[ 0 ].focus(); } }); } // Replace the validation UI for all forms var forms = document.querySelectorAll( "form" ); for ( var i = 0; i < forms.length; i++ ) { replaceValidationUI( forms[ i ] ); } Alternative UI #3: Replacement bubbles The last UI I'll present is a way of mimicking the browser's validation bubble with a completely custom (and styleable) bubble built with JavaScript. Here's the implementation in action. In this example, I'm using a Kendo UI tooltip because I don't want to worry about handling the positioning logic of the bubbles myself. The code I'm using to build this UI is below. For this implementation I chose to use jQuery to clean up the DOM code (as Kendo UI depends on jQuery).
$( "form" ).each(function() { var form = this; // Suppress the default bubbles form.addEventListener( "invalid", function( event ) { event.preventDefault(); }, true ); // Support Safari, iOS Safari, and the Android browser—each of which do not prevent // form submissions by default $( form ).on( "submit", function( event ) { if ( !this.checkValidity() ) { event.preventDefault(); } }); $( "input, select, textarea", form ) // Destroy the tooltip on blur if the field contains valid data .on( "blur", function() { var field = $( this ); if ( field.data( "kendoTooltip" ) ) { if ( this.validity.valid ) { field.kendoTooltip( "destroy" ); } else { field.kendoTooltip( "hide" ); } } }) // Show the tooltip on focus .on( "focus", function() { var field = $( this ); if ( field.data( "kendoTooltip" ) ) { field.kendoTooltip( "show" ); } }); $( "button:not([type=button]), input[type=submit]", form ).on( "click", function( event ) { // Destroy any tooltips from previous runs $( "input, select, textarea", form ).each( function() { var field = $( this ); if ( field.data( "kendoTooltip" ) ) { field.kendoTooltip( "destroy" ); } }); // Add a tooltip to each invalid field var invalidFields = $( ":invalid", form ).each(function() { var field = $( this ).kendoTooltip({ content: function() { return field[ 0 ].validationMessage; } }); }); // If there are errors, give focus to the first invalid field invalidFields.first().trigger( "focus" ).eq( 0 ).focus(); }); }); Although replacing the validation bubbles requires an unfortunate amount of code, this does come fairly close to replicating the browser’s implementation. The difference being that the JavaScript implementation is far more customizable, as you can change it to your heart’s desire. For instance, if you need to add some pink, green, and Comic Sans to your bubbles, you totally can now: The Kendo UI tooltip widget is one of the 25+ widgets available in Kendo UI Core, the free and open source distribution of Kendo UI. 
So you can use this code today without worrying about licensing restrictions — or paying money. You can download the Kendo UI Core source directly, use our CDN, or grab the library from Bower (bower install kendo-ui-core). Wrapping Up Although you cannot style the browser's validation bubbles, you can suppress them and build whatever UI you'd like. Feel free to try and alter the approaches shown in this article to meet your needs. If you have any other approaches you've used in the past feel free to share them in the comments.
October 15, 2014
by Burke Holland
· 12,691 Views
MySQL Replication: 'Got fatal error 1236' Causes and Cures
Originally written by Muhammad Irfan. MySQL replication is a core process for maintaining multiple copies of data, and replication is a very important aspect of database administration. In order to synchronize data between master and slaves you need to make sure that data transfers smoothly, and to do so you need to act promptly on replication errors to keep data synchronization going. Here on the Percona Support team, we often help customers with broken-replication issues. In this post I'll highlight the most critical replication error, code 1236, along with its causes and cures. MySQL replication error "Got fatal error 1236" can be triggered for multiple reasons, and I will try to cover all of them. Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master; the first event 'binlog.000201' at 5480571 This is a typical error on the slave(s). It reflects a problem with the max_allowed_packet size, which limits the size of a single packet, in this case a binary log event sent from the master to the slave. This error usually occurs when the master and slave have different max_allowed_packet sizes (i.e. the master's is greater than the slave's). When the master tries to send a packet bigger than the slave allows, the slave fails to accept it, hence the error. In order to alleviate this issue, make sure max_allowed_packet has the same value on both master and slave. You can read more about max_allowed_packet here. This error usually occurs when updating a huge number of rows on the master in a statement that does not fit within the slave's max_allowed_packet because the slave's value is lower than the master's. This usually happens with "LOAD DATA INFILE" or "INSERT .. SELECT" queries.
In my experience, this can also be caused by application logic that generates a huge INSERT with junk data. Take into account that MySQL 5.6.6 and later introduce a new variable, slave_max_allowed_packet, which controls the maximum packet size for the replication threads. It overrides the max_allowed_packet variable on the slave, and its default value is 1 GB. In the post "max_allowed_packet and binary log corruption in MySQL," my colleague Miguel Angel Nieto explains this error in detail. Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file' This error occurs when a binary log that the slave requires for replication no longer exists on the master. In one scenario, the slave is stopped for some reason for a few hours or days, and when you resume replication on the slave it fails with the above error. When you investigate, you will find that the master no longer has the binary logs the slave needs to pull in order to synchronize data. Possible reasons for this include: the master expired binary logs via the system variable expire_logs_days, or someone manually deleted binary logs from the master via the PURGE BINARY LOGS command or via 'rm -f', or maybe you have some cron job that archives older binary logs to reclaim disk space, etc. So, make sure the required binary logs always exist on the master; you can update your procedures to keep the binary logs the slave requires by monitoring the "Relay_master_log_file" value from SHOW SLAVE STATUS output. Moreover, if you have set expire_logs_days in my.cnf, old binlogs expire automatically and are removed. This means that when MySQL opens a new binlog file, it checks the older binlogs and purges any that are older than the value of expire_logs_days (in days).
Percona Server added a feature to expire logs based on the total number of files rather than the age of the binlog files. So in that configuration, if you get a spike of traffic, it could cause binlogs to disappear sooner than you expect. For more information check Restricting the number of binlog files. In order to resolve this problem, the only clean solution I can think of is to re-create the slave from a backup of the master or of another slave in the replication topology. – Got fatal error 1236 from master when reading data from binary log: 'binlog truncated in the middle of event; consider out of disk space on master; the first event 'mysql-bin.000525' at 175770780, the last event read from '/data/mysql/repl/mysql-bin.000525' at 175770780, the last byte read from '/data/mysql/repl/mysql-bin.000525' at 175771648.' Usually, this is caused by sync_binlog != 1 on the master, which means binary log events may not be synchronized to disk. There might be a committed SQL statement or row change (depending on your replication format) on the master that did not make it to the slave because the event is truncated. The solution is to move the slave thread to the next available binary log and initialize the slave thread at the first available position on that binary log, as below: mysql> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000526', MASTER_LOG_POS=4; – [ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from impossible position; the first event 'mysql-bin.010711' at 55212580, the last event read from '/var/lib/mysql/log/mysql-bin.000711' at 4, the last byte read from '/var/lib/mysql/log/mysql-bin.010711' at 4.', Error_code: 1236 This suggests the master server crashed or rebooted, and hence binary log events were not synchronized to disk. This usually happens when sync_binlog != 1 on the master.
You can investigate by inspecting the binary log contents as below: $ mysqlbinlog --base64-output=decode-rows --verbose --verbose --start-position=55212580 mysql-bin.010711 You will find that this is the last position in the binary log, i.e. the end of the binary log file. This issue can usually be fixed by moving the slave to the next binary log. In this case it would be: mysql> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000712', MASTER_LOG_POS=4; This will resume replication. To avoid corrupted binlogs on the master, enabling sync_binlog=1 on the master helps in most cases. sync_binlog=1 synchronizes the binary log to disk after every commit: it makes MySQL perform an fsync on the binary log in addition to the fsync done by InnoDB. As a reminder, this has some cost impact, as it synchronizes the write to the binary log on disk after every commit. On the other hand, the overhead of sync_binlog=1 can be very minimal or negligible if the disk subsystem is SSD, along with a battery-backed cache (BBU). You can read more about this here in the manual. sync_binlog is a dynamic option that you can enable on the fly. Here's how: mysql-master> SET GLOBAL sync_binlog=1; To make the change persistent across reboots, you can add this parameter to my.cnf. As a side note, along with fixing replication, it is always a good idea to make sure your replica is in sync with the master and to validate the data between master and slaves. Fortunately, Percona Toolkit has tools for this purpose: pt-table-checksum and pt-table-sync. Before checking for replication consistency, be sure to check the replication environment and then, later, to sync any differences.
October 15, 2014
by Peter Zaitsev
· 40,028 Views
Spring @Configuration - RabbitMQ Connectivity
I have been playing around with converting an application that I have to use the Spring @Configuration mechanism to configure connectivity to RabbitMQ; originally I had the configuration described using an XML bean definition file. So this was my original configuration: This is a fairly simple configuration that: sets up a connection to a RabbitMQ server, creates a durable queue (if not available), creates a durable exchange, and configures a binding so that messages sent to the exchange are routed to the queue based on a routing key called "rube.key". This can be translated to the following @Configuration-based Java configuration: @Configuration public class RabbitConfig { @Autowired private ConnectionFactory rabbitConnectionFactory; @Bean DirectExchange rubeExchange() { return new DirectExchange("rmq.rube.exchange", true, false); } @Bean public Queue rubeQueue() { return new Queue("rmq.rube.queue", true); } @Bean Binding rubeExchangeBinding(DirectExchange rubeExchange, Queue rubeQueue) { return BindingBuilder.bind(rubeQueue).to(rubeExchange).with("rube.key"); } @Bean public RabbitTemplate rubeExchangeTemplate() { RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory); r.setExchange("rmq.rube.exchange"); r.setRoutingKey("rube.key"); r.setConnectionFactory(rabbitConnectionFactory); return r; } } This configuration should look much simpler than the XML version. I am cheating a little here, though: you may have noticed the connectionFactory, which is simply injected into this configuration. Where does that come from? This is actually part of a Spring Boot based application, and Spring Boot auto-configures a RabbitMQ connectionFactory when the RabbitMQ-related libraries are present on the classpath.
Here is the complete configuration if you are interested in exploring further - https://github.com/bijukunjummen/rg-si-rabbit/blob/master/src/main/java/rube/config/RabbitConfig.java References: Spring-AMQP project here Spring-Boot starter project using RabbitMQ here
October 14, 2014
by Biju Kunjummen
· 37,891 Views
How to Code AngularJS Quickly with Sublime Text Editor
Sublime Text Editor can help you write in AngularJS much more quickly.
October 14, 2014
by Ajitesh Kumar
· 139,600 Views · 3 Likes
Spring @Configuration and Injecting Bean Dependencies as Method Parameters
One of the ways Spring recommends injecting inter-dependencies between beans is shown in the following sample, copied from Spring's reference guide here: @Configuration public class AppConfig { @Bean public Foo foo() { return new Foo(bar()); } @Bean public Bar bar() { return new Bar("bar1"); } } So here, bean `foo` is being injected with a `bar` dependency. However, there is an alternate way to inject a dependency that is not well documented: just take the dependency as a `@Bean` method parameter, this way: @Configuration public class AppConfig { @Bean public Foo foo(Bar bar) { return new Foo(bar); } @Bean public Bar bar() { return new Bar("bar1"); } } There is a catch here, though: the injection is now by type. The `bar` dependency is resolved by type first and, if duplicates are found, then by name: @Configuration public static class AppConfig { @Bean public Foo foo(Bar bar1) { return new Foo(bar1); } @Bean public Bar bar1() { return new Bar("bar1"); } @Bean public Bar bar2() { return new Bar("bar2"); } } In the above sample, the `bar1` dependency will be correctly injected. If you want to be more explicit about it, a @Qualifier annotation can be added in: @Configuration public class AppConfig { @Bean public Foo foo(@Qualifier("bar1") Bar bar1) { return new Foo(bar1); } @Bean public Bar bar1() { return new Bar("bar1"); } @Bean public Bar bar2() { return new Bar("bar2"); } } So, is this recommended at all? I would say yes, for certain cases. For example, had the bar bean been defined in a different @Configuration class, the way to inject the dependency would be along these lines: @Configuration public class AppConfig { @Autowired @Qualifier("bar1") private Bar bar1; @Bean public Foo foo() { return new Foo(bar1); } } I find the method parameter approach simpler here: @Configuration public class AppConfig { @Bean public Foo foo(@Qualifier("bar1") Bar bar1) { return new Foo(bar1); } }
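The by-type-then-by-name rule described above can be modeled with a toy resolver. This is emphatically not how Spring is implemented, just a small self-contained sketch of the resolution order: a unique match by type wins outright, and the parameter name only breaks ties:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of the resolution rule for @Bean method parameters:
// match by type first, fall back to the parameter name on duplicates.
// A sketch for illustration only; Spring's real container is far more involved.
public class ByTypeThenByNameSketch {
    static final Map<String, Object> beans = new LinkedHashMap<>();

    static Object resolve(Class<?> type, String parameterName) {
        Object byName = null;
        Object single = null;
        int matches = 0;
        for (Map.Entry<String, Object> e : beans.entrySet()) {
            if (type.isInstance(e.getValue())) {
                matches++;
                single = e.getValue();
                if (e.getKey().equals(parameterName)) byName = e.getValue();
            }
        }
        if (matches == 1) return single;   // unique match by type wins
        if (byName != null) return byName; // duplicates: the parameter name decides
        throw new IllegalStateException("no unique bean of " + type);
    }

    public static void main(String[] args) {
        beans.put("bar1", "first bar");
        beans.put("bar2", "second bar");
        beans.put("baz", 7);

        // Two String beans exist, so the parameter name 'bar1' breaks the tie.
        System.out.println(resolve(String.class, "bar1"));      // prints "first bar"
        // Only one Integer bean exists, so type alone is enough.
        System.out.println(resolve(Integer.class, "anything")); // prints "7"
    }
}
```

This mirrors the behavior of the `foo(Bar bar1)` sample: with both `bar1` and `bar2` defined, the parameter name selects the bean, while a single candidate of the right type needs no name at all.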
October 14, 2014
by Biju Kunjummen
· 125,826 Views · 15 Likes
jQuery Mobile Tutorial: User Registration, Login and Logout Screens for the Meeting Room Booking App
in this jquery mobile tutorial we will create the screens that will handle user registration, login and logout in a real-world meeting room booking application. this article is part of a series of mobile application development tutorials that i have been publishing on my blog jorgeramon.me. if you are new to this series, i recommend that you read its first part, as well as this mobile ui patterns article where i provide a flowchart describing the user registration, login and logout screens in a mobile application. we will use this chart as a guide for this article. here’s a screenshot: in this part of the tutorial we will only create the static html for the screens. in future articles we will implement the programming logic that makes the pages work. the first step we are going to take is to set up a jquery mobile project for the app. how to set up a jquery mobile project while you can use mobile sdks such as kendo ui mobile and intel xdk to create, debug and deploy jquery mobile apps, in this tutorial i will show you how to create a simple jquery mobile project without using the facilities provided by those sdks. i think that it’s important to understand how you can create this type of project from scratch, and how the different pieces in the project work together in an app. the project’s directories and files we need to pick a directory in our development workstations where we will place the project’s files. in my case i named that directory “apps”. in that directory, we will create a root directory for the application, which we will name “conf-rooms”. make sure that this directory is set up so it can be accessed from your local web server. under “conf-rooms” we will create a “css” directory, where we will place the css assets of the project; and an “img” directory for the images that we will use. at the same level of the “apps” directory, we will create a “lib” directory. 
this is where we will place the jquery mobile and any other libraries that our application will use. you also need to set up this directory so it can be accessed from your local web server. on my workstation the directories look as depicted below: now is a good time to download the jquery mobile and jquery libraries from their respective websites, and place them in the “jqm” and “jquery” directories, all under the “lib” directory. this is how the files look on my workstation: how jquery mobile works a short overview of jquery mobile for those who aren’t very familiar with it yet. as its documentation clearly explains, jquery mobile is a unified user interface system with the following characteristics: it works seamlessly across all popular mobile device platforms. it uses jquery and jquery ui as its foundations. it has a lightweight codebase built on progressive enhancement. it has a flexible and easily themeable design. an attribute that differentiates jquery mobile from other frameworks is that it targets a wide variety of mobile browsers. the reason this coverage is possible has to do with the way jquery mobile works. jquery mobile works by applying css and javascript enhancements to html pages built with clean, semantic html. the usage of semantic html ensures compatibility with most web-enabled devices. the techniques applied by the framework to an html page, transform the semantic page into a rich and interactive experience. we call these changes progressive enhancements, as they are applied progressively to the page, taking advantage of the capabilities of the browser on the web-enabled device. the enhancements result in pages that provide a great user experience on the latest mobile browsers and degrade gracefully on less capable browsers, without losing their intrinsic functionality. 
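to make the “clean, semantic html” idea concrete, here is a minimal sketch of the kind of page jquery mobile enhances. the library paths and version numbers are illustrative, not necessarily the ones in this project — adjust them to the files you downloaded:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Book It</title>
  <!-- paths and versions are illustrative; "lib" is a sibling of "apps" -->
  <link rel="stylesheet" href="../../lib/jqm/jquery.mobile-1.4.4.min.css">
  <script src="../../lib/jquery/jquery-1.11.1.min.js"></script>
  <script src="../../lib/jqm/jquery.mobile-1.4.4.min.js"></script>
</head>
<body>
  <!-- jquery mobile progressively enhances any div with data-role="page" -->
  <div data-role="page">
    <div data-role="header"><h1>Book It</h1></div>
    <div role="main" class="ui-content">
      <h2>Welcome!</h2>
    </div>
  </div>
</body>
</html>
```

the framework finds the data-role attributes at load time and applies its css and javascript enhancements to the plain markup.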
in addition, the framework provides support for screen readers and other assistive technologies through a tight integration with the web accessibility initiative – accessible rich internet applications suite (wai-aria) technical specification. creating the landing screen the first application that we will create is the landing screen. this screen will come up when users launch our application. as reflected in the flowchart at the beginning of this article, the landing screen is the door to all the areas of the app, and it requires that users log in before navigating any further. in the “wireframes for signing in and signing up” section of the third part of this tutorial we created the following mockup for this screen. let’s create an empty index.html file in the “conf-rooms” directory, and add a jquery mobile page template to the file as follows: book it welcome! existing users sign in don't have an account? sign up before we step through this code i want you to check out this file in a mobile browser or simulator. the result should look like this screenshot: what do you think? back in the index.html file, in the head section we have references to the jquery and jquery mobile libraries. double check that yours are pointing to the correct directories in your workstation. how to use a custom theme in jquery mobile now i want to direct your attention to the following lines: these lines mean that we are using a custom jquery mobile theme that resides in the “conf-room1.min.css” file. i created this file using jquery mobile’s theme roller . we will use this theme to give our app a look different than the standard jquery mobile themes. you can download the theme using this link . after downloading the zipped theme files, we will go to the “css” directory, create a “css/themes/1″ directory and place the unzipped theme files there. 
when done, the “css” directory should look like this: in the head section of the index.html file we also have this code: app.css is the css file where we will place any additional custom styles that we will use in the app. for the moment, we will add the following code to the “app.css” file: /* change html headers color */ h1,h2,h3,h4,h5 { color:#0071bc; } h2.mc-text-danger, h3.mc-text-danger { color:red; } /* change border radius of icon buttons */ .ui-btn-icon-notext.ui-corner-all { -webkit-border-radius: .3125em; border-radius: .3125em; } /* change color of jquery mobile page headers */ .ui-title { color:#fff; } /* center-aligned text */ .mc-text-center { text-align:center; } /* top margin for some elements */ .mc-top-margin-1-5 { margin-top:1.5em; } these are just a few cosmetic changes that will enhance the look of the app. notice that i prefixed non-jquery-mobile classes with the characters “mc-” to avoid potential collisions with jquery mobile’s classes. the remaining lines in the head section of index.html are the references to the jquery and jquery mobile libraries. as i suggested earlier, make sure that yours are pointing to the correct directories in your project. let’s move on to the body section of the “index.html” file. there you will find the standard jquery mobile page template with a header and main divs: book it welcome! existing users sign in don't have an account? sign up we have decorated the “header” div with the data-theme=”c” attribute, which gives it the nice purple background color that we defined in the custom theme: in the “main” div we are using a couple of links to the sign in and sign up screens respectively. the links point to the sign-in.html and sign-up.html files that we will create in a few minutes. these links are decorated with the jquery mobile ui-btn, ui-btn-b and ui-corner-all classes, which make them look like buttons: this is all we need to do in the landing page for the moment. let’s move on to the log in screen. 
creating a log in screen with jquery mobile here’s the log in screen’s mockup that we built in the third part of this tutorial : we will use the log in screen to capture the user’s credentials and validate them against the application’s user accounts database. if validation succeeds, we will direct users to a “main menu” screen that we will create in an upcoming tutorial. let’s create an empty sign-in.html file in the project’s directory. in the file, we will write the following code: book it sign in email address password remember me submit can't access your account? login failed did you enter the right credentials? ok if you open this sign-in.html file with a mobile browser or emulator, you will see something like this: the head section of the html document is similar to the index.html file we created a few minutes ago, with the exception of the document’s title. no need to explain much there. in the “main” section of the jquery mobile page that we added to the file, we dropped a few controls that will allow us to capture the user’s email and password, along with a checkbox that will let us know when the user wants the app to remember their credentials: sign in email address password remember me submit can't access your account? we also added a link that will allow users to initiate the password reset process if they have problems logging in. you will also notice that the “submit” link points to the “dlg-invalid-credentials” anchor defined in the same jquery mobile page. this link is decorated with the data-rel=”popup”, data-transition=”pop” and data-position-to=”window” attributes. when we do this, we are telling jquery mobile to open the link to the element with id=”dlg-invalid-credentials” as a popup dialog, using a “pop” transition, and center the element relative to the document’s window. here’s the html for the popup: login failed did you enter the right credentials? 
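the wiring between the “submit” link and the popup described above looks roughly like this. the id and data attributes are the ones named in the article; the surrounding markup is a sketch:

```html
<!-- the link opens the element with the matching id as a popup,
     using a "pop" transition, centered on the window -->
<a href="#dlg-invalid-credentials" data-rel="popup"
   data-transition="pop" data-position-to="window"
   class="ui-btn ui-btn-b ui-corner-all">Submit</a>

<!-- data-role="popup" tells jquery mobile to style this div as a popup -->
<div data-role="popup" id="dlg-invalid-credentials" class="ui-content">
  <h3>Login failed</h3>
  <p>Did you enter the right credentials?</p>
  <a href="#" data-rel="back" class="ui-btn ui-btn-b ui-corner-all">OK</a>
</div>
```

note the asymmetry: data-rel=”popup” goes on the link that opens the dialog, while the popup container itself is marked with data-role=”popup”.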
ok notice that the “dlg-invalid-credentials” div is decorated with the data-role=”popup” attribute, signaling to jquery mobile to apply popup styles to this div. if you click or tap the “submit” button, you will see the “invalid credentials” popup: one last thing on this screen. we have the popup linked directly to the “submit” button for testing purposes. in an upcoming part of this tutorial we will add programming logic that will activate the popup only when the login fails. for the moment we are only concerned with creating the html code for the pages and making sure that the jquery mobile enhancements work on them. creating an account locked screen with jquery mobile many business applications use an account locked feature as a measure to increase the app’s security. we will use this feature in our app, and this means that we need to create an account locked screen. the purpose of the screen is to notify the user that their account is locked. we will define under which conditions an account will be locked through programming logic that we will add in an upcoming chapter of this tutorial. let’s create an empty account-locked.html file, and drop the following code in it: back app name your account is locked please contact the helpdesk to resolve this issue. the file should look like the screenshot below when viewed with a mobile browser or emulator: the html code for this screen is very similar to that in the prior two screens, with the exception of one element that i want you to pay attention to: back app name the header section of the jquery mobile page we just created contains a link to the sign-in.html file. when we decorate it with the ui-btn-left, ui-btn, ui-btn-icon-notext, ui-corner-all and ui-icon-back classes, we are giving the link the appearance of a toolbar button, just like this: the data-rel=”back” attribute causes any taps on the anchor to mimic a “back button”, going back one history entry and ignoring the anchor’s default href. 
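the header back button just described can be sketched like this, using exactly the classes listed above (the header text is illustrative):

```html
<div data-role="header" data-theme="c">
  <!-- the ui-* classes make the link look like a toolbar icon button;
       data-rel="back" makes a tap go back one history entry,
       ignoring the default href -->
  <a href="sign-in.html" data-rel="back"
     class="ui-btn-left ui-btn ui-btn-icon-notext ui-corner-all ui-icon-back">Back</a>
  <h1>App Name</h1>
</div>
```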
you can read more about navigation and linking on jquery mobile by visiting jquery mobile’s navigation documentation. you should also visit the jquery mobile buttons guide to learn about how to create buttons. creating a sign up page with jquery mobile when users tap on the landing screen’s “sign up” button, they will open the sign up screen. this is where we will capture the user’s personal information so we can create an account for them. remember that our mockup of the sign up screen looks like this: let’s create an empty sign-up.html file and add the following code to it: book it sign up first name last name email address password confirm password submit almost done... confirm your email address we sent you an email with instructions on how to confirm your email address. please check your inbox and follow the instructions in the email. ok the file should look as depicted below when you open it with a mobile browser or emulator: the only difference with the mockup is that we are using a “submit” button at the bottom of the screen, instead of a “done” button in the toolbar. when you examine the html code, you will find that the “submit” button is wired to the “dlg-sign-up-sent” popup: almost done... confirm your email address we sent you an email with instructions on how to confirm your email address. please check your inbox and follow the instructions in the email. ok if you tap on the button, the popup will become visible: we will use this popup to notify the users that we have sent them a message asking them to confirm their email address. the message will contain a link to a webpage where users will need to re-enter the email address used to create their account in the app. with this step we are trying to make sure that it was a human with a valid email inbox who created the account. back in the popup’s html code, notice how the “ok” button links back to the sign in screen. you should be able to confirm that this link works when you tap the button. 
creating a password reset screen with jquery mobile the app’s landing screen has a “can’t access your account?” link that helps users initiate the password reset workflow of the app. the first step of this workflow is to present the “begin password reset” screen to the user. we will use this screen to capture the user’s email address. if we find this email address in the user accounts database of the app, we will email the user a provisional password. next, we will activate the “end password reset” screen, where the user will need to enter the provisional password and a new password of their choosing. the picture below illustrates this process. let’s create an empty begin-password-reset.html file in the project’s directory. we will write the following code in the file: book it password reset enter your email address submit password reset check your inbox we sent you an email with instructions on how to reset your password. please check your inbox and follow the instructions in the email. ok this is how the screen should look when viewed on a mobile browser or emulator: there is nothing new in the html code of this screen. we wired the “submit” button so when a user taps it, the embedded “dlg-pwd-reset-sent” popup will become active: we did this for testing purposes. remember that we will add the programming logic that activates these popups in upcoming chapters of this tutorial. when a user taps the popup’s “ok” button, the application will navigate to the “end password reset” screen, which we will create next. the end password reset screen this is the screen where the user will enter the provisional password we sent them via email, along with a new password of their choosing. to create this screen we will add an empty “end-password-reset.html” file to the project. here’s the code that goes in the file: book it reset password provisional password new password confirm new password submit done your password was changed. 
ok the screen should look like the picture below when viewed with a mobile browser or emulator: we wired the “submit” button so it activates the embedded “dlg-pwd-changed” popup: this popup simply tells the user that their password was changed. tapping the “ok” button will make the app navigate back to the sign in screen, where the user can sign in with the new password. summary and next steps this concludes our first phase of work on the user registration, login and logout screens of the jquery mobile version of the app. i will emphasize again that in this phase we are not adding programming logic to the screens. we are simply creating a jquery mobile page for each screen and making sure that the visual elements within the screens adhere to the mockups that we created in previous chapters of this tutorial, as well as to the ui patterns flowchart that i mentioned at the beginning of this article. while we’ve made significant progress with the app at this point, it’s fair to say that we are just getting started. we are still missing the programming for this article’s screens, as well as the jquery mobile pages and programming for the screens that will allow users to browse and reserve meeting rooms, which is why we created the app in the first place. in the next chapter of this tutorial we will get started with the programming of the user profile screens we just created. don’t forget to sign up for my mailing list so you can be among the first to know when i publish the next update. stay tuned!
October 13, 2014
by Jorge Ramon
· 78,392 Views · 4 Likes
article thumbnail
Neo4j: COLLECTing Multiple Values (Too Many Parameters for Function ‘Collect’)
One of my favourite functions in Neo4j’s cypher query language is COLLECT which allows us to group items into an array for later consumption. However, I’ve noticed that people sometimes have trouble working out how to collect multiple items with COLLECT and struggle to find a way to do so. Consider the following data set: create (p:Person {name: "Mark"}) create (e1:Event {name: "Event1", timestamp: 1234}) create (e2:Event {name: "Event2", timestamp: 4567}) create (p)-[:EVENT]->(e1) create (p)-[:EVENT]->(e2) If we wanted to return each person along with a collection of the event names they’d participated in we could write the following: $ MATCH (p:Person)-[:EVENT]->(e) > RETURN p, COLLECT(e.name); +--------------------------------------------+ | p | COLLECT(e.name) | +--------------------------------------------+ | Node[0]{name:"Mark"} | ["Event1","Event2"] | +--------------------------------------------+ 1 row That works nicely, but what about if we want to collect the event name and the timestamp but don’t want to return the entire event node? An approach I’ve seen a few people try during workshops is the following: MATCH (p:Person)-[:EVENT]->(e) RETURN p, COLLECT(e.name, e.timestamp) Unfortunately this doesn’t compile: SyntaxException: Too many parameters for function 'collect' (line 2, column 11) "RETURN p, COLLECT(e.name, e.timestamp)" ^ As the error message suggests, the COLLECT function only takes one argument so we need to find another way to solve our problem. 
One way is to put the two values into a literal array which will result in an array of arrays as our return result: $ MATCH (p:Person)-[:EVENT]->(e) > RETURN p, COLLECT([e.name, e.timestamp]); +----------------------------------------------------------+ | p | COLLECT([e.name, e.timestamp]) | +----------------------------------------------------------+ | Node[0]{name:"Mark"} | [["Event1",1234],["Event2",4567]] | +----------------------------------------------------------+ 1 row The annoying thing about this approach is that as you add more items you’ll forget in which position you’ve put each bit of data so I think a preferable approach is to collect a map of items instead: $ MATCH (p:Person)-[:EVENT]->(e) > RETURN p, COLLECT({eventName: e.name, eventTimestamp: e.timestamp}); +--------------------------------------------------------------------------------------------------------------------------+ | p | COLLECT({eventName: e.name, eventTimestamp: e.timestamp}) | +--------------------------------------------------------------------------------------------------------------------------+ | Node[0]{name:"Mark"} | [{eventName -> "Event1", eventTimestamp -> 1234},{eventName -> "Event2", eventTimestamp -> 4567}] | +--------------------------------------------------------------------------------------------------------------------------+ 1 row During the Clojure Neo4j Hackathon that we ran earlier this week this proved to be a particularly pleasing approach as we could easily destructure the collection of maps in our Clojure code.
October 13, 2014
by Mark Needham
· 9,980 Views
article thumbnail
Creating Executable Uber Jar’s and Native Applications with Java 8 and Maven
creating uber jars in java is nothing particularly new; even creating executable jars was possible with maven long before java 8. with the first release of javafx 2, oracle introduced the javafxpackager tool, which has now been renamed to javapackager (java 8 u20). this enables developers to create native executables for any common platform, even mac app store packages; the drawbacks of the javapackager are that you must create the executable on the target platform and that it will then contain the whole jre to run the application. this means that your 100kb application grows to more than 40mb, depending on your target platform. the first step on your way to an executable uber jar is to build your project and to collect all dependencies in a folder or a fat jar. this folder/fat jar will be the input for the javapackager tool, which creates the executable part. solution 1: the maven-dependency-plugin the maven-dependency-plugin contains the goal “unpack-dependencies”. we can use this goal to enhance the default “target/classes” folder with all the dependencies you need to run your application. we assume that the default maven build creates all classes of your project in the “target/classes” folder and the “unpack-dependencies” goal copies all the project dependencies to this folder. the result is a valid input for the javapackager tool, which creates the executable. the “unpack-dependencies” goal is easy to use; you just need to set the output directory to the “target/classes” folder to ensure everything is together in one folder. a typical configuration looks like this: maven-dependency-plugin 2.6 unpack-dependencies package unpack-dependencies system org.springframework.jmx ${project.build.directory}/classes to create an executable jar from the target/classes directory we use the “exec-maven-plugin” to execute the javapackager commandline tool. 
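the typical “unpack-dependencies” configuration, sketched as pom xml. the version, excluded group id and output directory follow the values visible in the text above; the surrounding element structure is a reconstruction:

```xml
<plugin>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>2.6</version>
  <executions>
    <execution>
      <id>unpack-dependencies</id>
      <phase>package</phase>
      <goals>
        <goal>unpack-dependencies</goal>
      </goals>
      <configuration>
        <!-- skip system-scoped dependencies and an excluded group -->
        <excludeScope>system</excludeScope>
        <excludeGroupIds>org.springframework.jmx</excludeGroupIds>
        <!-- unpack everything into target/classes alongside our own classes -->
        <outputDirectory>${project.build.directory}/classes</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```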
org.codehaus.mojo exec-maven-plugin package-jar package exec ${env.java_home}/bin/javapackager -createjar -appclass ${app.main.class} -srcdir ${project.build.directory}/classes -outdir ./target -outfile ${project.artifactid}-app -v now we can create an executable jar, which contains all the project dependencies and that can simply be executed with “java -jar myapp.jar”. in the next step we want to create a native executable or an installer. specific configuration details for the javapackager can be found here: http://docs.oracle.com/javase/8/docs/technotes/tools/unix/javapackager.html. for this tutorial we assume that we want to create a native installer. to do so, we add a second “execution” to the “exec-maven-plugin” like this: package-jar2 package exec ${env.java_home}/bin/javapackager -deploy -native installer -appclass ${app.main.class} -srcfiles ${project.build.directory}/${artifactid}-app.jar -outdir ./target -outfile ${project.artifactid}-app -v once the configuration is done, you can run “mvn clean package” and you will find your executable jar as well as the native installer in your target folder. this solution works fine in most cases, but sometimes you can get in trouble with this plugin. when you have configuration files in your project that also exist in one of your dependencies, the unpack-dependencies goal will overwrite your configuration file. for example, your project contains a file like “meta-inf/services/com.myconf.file” and one of your dependencies contains the same file. in this case solution 2 may be a better approach. solution 2: the maven-shade plugin the maven-shade plugin provides the capability to package artifacts into an uber-jar, including its dependencies. it also provides various transformers to merge configuration files or to define the manifest of your jar file. this capability allows us to create an executable uber jar without involving the javapackager, so the packager is only needed to create the native executable. 
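the first “exec-maven-plugin” execution above, sketched as pom xml. the javapackager flags and property names (${app.main.class}, ${env.JAVA_HOME}) are the ones used in the article; the element layout is a reconstruction:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>package-jar</id>
      <phase>package</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <!-- shell out to the javapackager tool from the JDK -->
        <executable>${env.JAVA_HOME}/bin/javapackager</executable>
        <arguments>
          <argument>-createjar</argument>
          <argument>-appclass</argument>
          <argument>${app.main.class}</argument>
          <argument>-srcdir</argument>
          <argument>${project.build.directory}/classes</argument>
          <argument>-outdir</argument>
          <argument>./target</argument>
          <argument>-outfile</argument>
          <argument>${project.artifactId}-app</argument>
          <argument>-v</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```

the second execution for the native installer is the same pattern with the -deploy, -native installer and -srcfiles arguments instead.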
to build an executable uber jar the following plugin configuration is needed: org.apache.maven.plugins maven-shade-plugin 2.3 package shade junit:junit jmock:* *:xml-apis … meta-inf/services/conf.file ${app.main.class} ${maven.compile.java.version} ${maven.compile.java.version} this configuration defines a main-class entry in the manifest and merges all “meta-inf/services/conf.files” together. the resulting executable jar file is now a valid input for the javapackager to create a native installer. the configuration to create a native installer with the “exec-maven-plugin” and javapackager tool is exactly the same as in solution 1. i provided two example projects on github https://github.com/amoahcp/mvndemos where you can test both configurations. these are two simple projects with a main class starting a jetty webserver on port 8080, so the only dependency is jetty. both solutions can be used for any type of java project, even with swing or javafx. reference: http://jacpfx.org/2014/10/08/uber-jars.html
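the shade configuration above, sketched as pom xml. the excludes, merged resource path and main-class property follow the values visible in the text; the transformer classes and element structure are a reconstruction from the maven-shade-plugin's standard transformers:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <artifactSet>
          <excludes>
            <exclude>junit:junit</exclude>
            <exclude>jmock:*</exclude>
            <exclude>*:xml-apis</exclude>
          </excludes>
        </artifactSet>
        <transformers>
          <!-- merge service configuration files that exist in several jars -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
            <resource>META-INF/services/conf.file</resource>
          </transformer>
          <!-- write the Main-Class entry into the manifest -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>${app.main.class}</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```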
October 12, 2014
by Andy Moncsek
· 29,167 Views
article thumbnail
Do it in Java 8: The State Monad
In a previous article (Do it in Java 8: Automatic memoization), I wrote about memoization and said that memoization is about handling state between function calls, although the value returned by the function does not change from one call to another as long as the argument is the same. I showed how this could be done automatically. There are however other cases when handling state between function calls is necessary but cannot be done in this simple way. One is handling state between recursive calls. In such a situation, the function is called only once from the user point of view, but it is in fact called several times since it calls itself recursively. Most of these functions will not benefit internally from memoization. For example, the factorial function may be implemented recursively, and for one call to f(n), there will be n calls to the function, but never twice with the same value. So this function would not benefit from internal memoization. On the contrary, the Fibonacci function, if implemented according to its recursive definition, will call itself recursively a huge number of times with the same arguments. Here is the standard definition of the Fibonacci function: f(n) = f(n – 1) + f(n – 2) This definition has a major problem: calculating f(n) implies a number of function calls that grows exponentially with n. The result is that, in practice, it is impossible to use this definition for n > 50 because the time of execution increases exponentially. But since only n distinct values are needed while the function is called far more often, it is obvious that some values are calculated several times. So this definition would be a good candidate for internal memoization. Warning: Java is not a true recursive language, so using any recursive method will eventually blow the stack, unless we use TCO (Tail Call Optimization) as described in my previous article (Do it in Java 8: recursive and corecursive Fibonacci). 
However, TCO can't be applied to the original Fibonacci definition because it is not tail recursive. So we will write an example working for n limited to a few thousand. The naive implementation Here is how we might implement the original definition: static BigInteger fib(BigInteger n) { return n.equals(BigInteger.ZERO) || n.equals(BigInteger.ONE) ? n : fib(n.subtract(BigInteger.ONE)).add(fib(n.subtract(BigInteger.ONE.add(BigInteger.ONE)))); } Do not try this implementation for values greater than 50. On my machine, fib(50) takes 44 minutes to return! Basic memoized version To avoid computing the same values several times, we can use a map where computed values are stored. This way, for each value, we first look into the map to see if it has already been computed. If it is present, we just retrieve it. Otherwise, we compute it, store it into the map and return it. For this, we will use a special class called Memo. This is a normal HashMap that mimics the interface of a functional map, which means that insertion of a new (key, value) pair returns a map, and looking up a key returns an Optional<BigInteger>: public class Memo extends HashMap<BigInteger, BigInteger> { public Optional<BigInteger> retrieve(BigInteger key) { return Optional.ofNullable(super.get(key)); } public Memo addEntry(BigInteger key, BigInteger value) { super.put(key, value); return this; } } Note that this is not a true functional (immutable and persistent) map, but this is not a problem since it will not be shared. 
We will also need a Tuple class that we define as: public class Tuple<A, B> { public final A _1; public final B _2; public Tuple(A a, B b) { _1 = a; _2 = b; } } With this class, we can write the following basic implementation: static BigInteger fibMemo1(BigInteger n) { return fibMemo(n, new Memo().addEntry(BigInteger.ZERO, BigInteger.ZERO) .addEntry(BigInteger.ONE, BigInteger.ONE))._1; } static Tuple<BigInteger, Memo> fibMemo(BigInteger n, Memo memo) { return memo.retrieve(n).map(x -> new Tuple<>(x, memo)).orElseGet(() -> { BigInteger x = fibMemo(n.subtract(BigInteger.ONE), memo)._1 .add(fibMemo(n.subtract(BigInteger.ONE).subtract(BigInteger.ONE), memo)._1); return new Tuple<>(x, memo.addEntry(n, x)); }); } This implementation works fine, provided values of n are not too big. For f(50), it returns in less than 1 millisecond, which should be compared to the 44 minutes of the naive version. Remark that we do not have to test for the terminal values (f(0) and f(1)). These values are simply inserted into the map at start. So, is there something we should make better? The main problem is that we have to handle the passing of the Memo map by hand. The signature of the fibMemo method is no longer fibMemo(BigInteger n), but fibMemo(BigInteger n, Memo memo). Could we simplify this? We might think about using automatic memoization as described in my previous article (Do it in Java 8: Automatic memoization). However, this will not work: static Function<BigInteger, BigInteger> fib = new Function<BigInteger, BigInteger>() { @Override public BigInteger apply(BigInteger n) { return n.equals(BigInteger.ZERO) || n.equals(BigInteger.ONE) ? n : this.apply(n.subtract(BigInteger.ONE)).add(this.apply(n.subtract(BigInteger.ONE.add(BigInteger.ONE)))); } }; static Function<BigInteger, BigInteger> fibm = Memoizer.memoize(fib); Besides the fact that we could not use lambdas because reference to this is not allowed, which may be worked around by using the original anonymous class syntax, the recursive call is made to the non memoized function, so it does not make things better. 
Using the State monad In this example, each computed value is strictly evaluated by the fibMemo method, and this is what makes the memo parameter necessary. Instead of a method returning a value, what we would need is a method returning a function that could be evaluated later. This function would take a Memo as parameter, and this Memo instance would be necessary only at evaluation time. This is what the State monad will do. Java 8 does not provide the state monad, so we have to create it, but it is very simple. However, we first need an implementation of a list that is more functional than what Java offers. In a real case, we would use a true immutable and persistent List. The one I have written is about 1 000 lines, so I can't show it here. Instead, we will use a dummy functional list, backed by a java.util.ArrayList. Although this is less elegant, it does the same job: public class List<T> { private java.util.List<T> list = new ArrayList<>(); public static <T> List<T> empty() { return new List<>(); } @SafeVarargs public static <T> List<T> apply(T... ta) { List<T> result = new List<>(); for (T t : ta) result.list.add(t); return result; } public List<T> cons(T t) { List<T> result = new List<>(); result.list.add(t); result.list.addAll(list); return result; } public <U> U foldRight(U seed, Function<T, Function<U, U>> f) { U result = seed; for (int i = list.size() - 1; i >= 0; i--) { result = f.apply(list.get(i)).apply(result); } return result; } public <U> List<U> map(Function<T, U> f) { List<U> result = new List<>(); for (T t : list) { result.list.add(f.apply(t)); } return result; } public List<T> filter(Function<T, Boolean> f) { List<T> result = new List<>(); for (T t : list) { if (f.apply(t)) { result.list.add(t); } } return result; } public Optional<T> findFirst() { return list.size() == 0 ? Optional.empty() : Optional.of(list.get(0)); } @Override public String toString() { StringBuilder s = new StringBuilder("["); for (T t : list) { s.append(t).append(", "); } return s.append("NIL]").toString(); } } The implementation is not functional, but the interface is! And although there are lots of missing capabilities, we have all we need. Now, we can write the state monad. It is often called simply State but I prefer to call it StateMonad in order to avoid confusion between the state and the monad: public class StateMonad<A, S> { public final Function<S, StateTuple<A, S>> runState; public StateMonad(Function<S, StateTuple<A, S>> runState) { this.runState = runState; } public static <A, S> StateMonad<A, S> unit(A a) { return new StateMonad<>(s -> new StateTuple<>(a, s)); } public static <S> StateMonad<S, S> get() { return new StateMonad<>(s -> new StateTuple<>(s, s)); } public static <A, S> StateMonad<A, S> getState(Function<S, A> f) { return new StateMonad<>(s -> new StateTuple<>(f.apply(s), s)); } public static <S> StateMonad<Nothing, S> transition(Function<S, S> f) { return new StateMonad<>(s -> new StateTuple<>(Nothing.instance, f.apply(s))); } public static <A, S> StateMonad<A, S> transition(Function<S, S> f, A value) { return new StateMonad<>(s -> new StateTuple<>(value, f.apply(s))); } public static <A, S> StateMonad<List<A>, S> compose(List<StateMonad<A, S>> fs) { return fs.foldRight(StateMonad.unit(List.empty()), f -> acc -> f.map2(acc, a -> b -> b.cons(a))); } public <B> StateMonad<B, S> flatMap(Function<A, StateMonad<B, S>> f) { return new StateMonad<>(s -> { StateTuple<A, S> temp = runState.apply(s); return f.apply(temp.value).runState.apply(temp.state); }); } public <B> StateMonad<B, S> map(Function<A, B> f) { return flatMap(a -> StateMonad.unit(f.apply(a))); } public <B, C> StateMonad<C, S> map2(StateMonad<B, S> sb, Function<A, Function<B, C>> f) { return flatMap(a -> sb.map(b -> f.apply(a).apply(b))); } public A eval(S s) { return runState.apply(s).value; } } This class is parameterized by two types: the value type A and the state type S. In our case, A will be BigInteger and S will be Memo. This class holds a function from a state to a tuple (value, state). 
This function is held in the runState field. This is similar to the value held in the Optional monad. To make it a monad, this class needs a unit method and a flatMap method. The unit method takes a value as its parameter and returns a StateMonad. It could be implemented as a constructor; here, it is a factory method. The flatMap method takes a function from A (a value) to StateMonad and returns a new StateMonad. (In our case, A is the same as B.) The new instance contains the new value and the new state that result from the application of the function. All other methods are convenience methods:

- map allows binding a function from A to B instead of a function from A to StateMonad. It is implemented in terms of flatMap and unit.
- eval allows easy retrieval of the value held by the StateMonad.
- getState allows creating a StateMonad from a function S -> A.
- transition takes a function from state to state and a value, and returns a new StateMonad holding the value and the state resulting from the application of the function. In other words, it allows changing the state without changing the value. There is also another transition method taking only a function and returning a StateMonad.

Nothing is a special class:

public final class Nothing {
    public static final Nothing instance = new Nothing();
    private Nothing() {}
}

This class could be replaced by Void, to mean that we do not care about the type. However, Void is not supposed to be instantiated, and the only reference of type Void is normally null. The problem is that null does not carry its type. We could instantiate a Void instance through introspection:

Constructor<Void> constructor = Void.class.getDeclaredConstructor();
constructor.setAccessible(true);
Void nothing = constructor.newInstance();

but this is really ugly, so we create a Nothing type with a single instance of it. This does the trick, although to be complete, Nothing should be able to replace any type (like null), which does not seem to be possible in Java.
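Before moving on to the Fibonacci example, it may help to see the mechanics in isolation. Here is a minimal, self-contained sketch of the same pattern (the stripped-down StateMonad and StateTuple mirror the classes above; the counter computation itself is invented for illustration):

```java
import java.util.function.Function;

public class CounterDemo {

    // Minimal version of the StateTuple class shown above.
    static class StateTuple<A, S> {
        final A value;
        final S state;
        StateTuple(A a, S s) { value = a; state = s; }
    }

    // Minimal version of the StateMonad class shown above.
    static class StateMonad<S, A> {
        final Function<S, StateTuple<A, S>> runState;

        StateMonad(Function<S, StateTuple<A, S>> runState) { this.runState = runState; }

        static <S, A> StateMonad<S, A> transition(Function<S, S> f, A value) {
            return new StateMonad<>(s -> new StateTuple<>(value, f.apply(s)));
        }

        <B> StateMonad<S, B> flatMap(Function<A, StateMonad<S, B>> f) {
            return new StateMonad<>(s -> {
                StateTuple<A, S> t = runState.apply(s);
                return f.apply(t.value).runState.apply(t.state);
            });
        }

        A eval(S s) { return runState.apply(s).value; }
    }

    // A computation that increments an Integer state twice, then returns the state.
    // Nothing runs until eval() supplies the initial state.
    static StateMonad<Integer, Integer> incrementTwice() {
        return StateMonad.<Integer, Integer>transition(n -> n + 1, 0)
            .flatMap(ignored -> StateMonad.<Integer, Integer>transition(n -> n + 1, 0))
            .flatMap(ignored -> new StateMonad<Integer, Integer>(s -> new StateTuple<>(s, s)));
    }

    public static void main(String[] args) {
        System.out.println(incrementTwice().eval(40)); // prints 42
    }
}
```

The point to notice is that incrementTwice() merely builds a function; the state (here an Integer instead of a Memo) only appears when eval is called.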
Using the StateMonad class, we can rewrite our program:

static BigInteger fibMemo2(BigInteger n) {
    return fibMemo(n).eval(new Memo()
        .addEntry(BigInteger.ZERO, BigInteger.ZERO)
        .addEntry(BigInteger.ONE, BigInteger.ONE));
}

static StateMonad<Memo, BigInteger> fibMemo(BigInteger n) {
    return StateMonad.getState((Memo m) -> m.retrieve(n))
        .flatMap(u -> u.map(StateMonad::unit)
            .orElse(fibMemo(n.subtract(BigInteger.ONE))
                .flatMap(x -> fibMemo(n.subtract(BigInteger.ONE).subtract(BigInteger.ONE))
                    .map(x::add)
                    .flatMap(z -> StateMonad.transition((Memo m) -> m.addEntry(n, z), z)))));
}

Now, the fibMemo method only takes a BigInteger as its parameter and returns a StateMonad, which means that when this method returns, nothing has been evaluated yet. The Memo doesn't even exist! To get the result, we call the eval method, passing it the Memo instance. If you find this code difficult to understand, here is an exploded, commented version using longer identifiers:

static StateMonad<Memo, BigInteger> fibMemo(BigInteger n) {
    /*
     * Create a function of type Memo -> Optional<BigInteger> with a closure
     * over the n parameter.
     */
    Function<Memo, Optional<BigInteger>> retrieveValueFromMapIfPresent =
        (Memo memoizationMap) -> memoizationMap.retrieve(n);
    /*
     * Create a state from this function.
     */
    StateMonad<Memo, Optional<BigInteger>> initialState =
        StateMonad.getState(retrieveValueFromMapIfPresent);
    /*
     * Create a function for converting the value (BigInteger) into a State
     * Monad instance. This function will be bound to the Optional resulting
     * from the lookup into the map to give the result if the value was found.
     */
    Function<BigInteger, StateMonad<Memo, BigInteger>> createStateFromValue =
        StateMonad::unit;
    /*
     * The value computation proper. This can't easily be decomposed because it
     * makes heavy use of closures. It first calls fibMemo(n - 1) recursively,
     * producing a StateMonad. It then flatMaps it to a new recursive call to
     * fibMemo(n - 2) (actually fibMemo(n - 1 - 1)) and gets a new StateMonad
     * which is mapped to BigInteger addition with the preceding value (x).
     * Then it flatMaps it again with the function
     * z -> StateMonad.transition((Memo m) -> m.addEntry(n, z), z), which adds
     * the two values and returns a new StateMonad with the computed value added
     * to the map.
     */
    StateMonad<Memo, BigInteger> computedValue =
        fibMemo(n.subtract(BigInteger.ONE))
            .flatMap(x -> fibMemo(n.subtract(BigInteger.ONE).subtract(BigInteger.ONE))
                .map(x::add)
                .flatMap(z -> StateMonad.transition((Memo m) -> m.addEntry(n, z), z)));
    /*
     * Create a function taking an Optional<BigInteger> as its parameter and
     * returning a state. This is the main function that returns the value in
     * the Optional if it is present, and otherwise computes it and puts it into
     * the map before returning it.
     */
    Function<Optional<BigInteger>, StateMonad<Memo, BigInteger>> computeFiboValueIfAbsentFromMap =
        u -> u.map(createStateFromValue).orElse(computedValue);
    /*
     * Bind the computeFiboValueIfAbsentFromMap function to the initial State
     * and return the result.
     */
    return initialState.flatMap(computeFiboValueIfAbsentFromMap);
}

The most important part is the following:

StateMonad<Memo, BigInteger> computedValue =
    fibMemo(n.subtract(BigInteger.ONE))
        .flatMap(x -> fibMemo(n.subtract(BigInteger.ONE).subtract(BigInteger.ONE))
            .map(x::add)
            .flatMap(z -> StateMonad.transition((Memo m) -> m.addEntry(n, z), z)));

This kind of code is essential to functional programming, although in other languages it is sometimes replaced with "for comprehensions". As Java 8 does not have for comprehensions, we have to use this form. At this point, we have seen that using the state monad allows abstracting the handling of state. This technique can be used every time you have to handle state.

More uses of the state monad

The state monad may be used in many other cases where state must be maintained in a functional way. Most programs based upon maintaining state use a concept known as a state machine. A state machine is defined by an initial state and a series of inputs.
Each input submitted to the state machine produces a new state by applying one of several possible transitions, selected from a list of conditions concerning both the input and the current state. If we take the example of a bank account, the initial state would be the initial balance of the account. Possible transitions would be deposit(amount) and withdraw(amount). The conditions would be true for deposit and balance >= amount for withdraw. Given the state monad we have implemented above, we can write a parameterized state machine:

public class StateMachine<I, S> {

    Function<I, StateMonad<S, Nothing>> function;

    public StateMachine(List<Tuple<Condition<I, S>, Transition<I, S>>> transitions) {
        function = i -> StateMonad.transition(m ->
            Optional.of(new StateTuple<>(i, m))
                .flatMap((StateTuple<I, S> t) -> transitions
                    .filter((Tuple<Condition<I, S>, Transition<I, S>> x) -> x._1.test(t))
                    .findFirst()
                    .map((Tuple<Condition<I, S>, Transition<I, S>> y) -> y._2.apply(t)))
                .get());
    }

    public StateMonad<S, S> process(List<I> inputs) {
        List<StateMonad<S, Nothing>> a = inputs.map(function);
        StateMonad<S, List<Nothing>> b = StateMonad.compose(a);
        return b.flatMap(x -> StateMonad.get());
    }
}

This machine uses a bunch of helper classes.
First, the inputs are represented by an interface:

public interface Input {
    boolean isDeposit();
    boolean isWithdraw();
    int getAmount();
}

There are two implementations of Input:

public class Deposit implements Input {

    private final int amount;

    public Deposit(int amount) {
        this.amount = amount;
    }

    @Override
    public boolean isDeposit() { return true; }

    @Override
    public boolean isWithdraw() { return false; }

    @Override
    public int getAmount() { return this.amount; }
}

public class Withdraw implements Input {

    private final int amount;

    public Withdraw(int amount) {
        this.amount = amount;
    }

    @Override
    public boolean isDeposit() { return false; }

    @Override
    public boolean isWithdraw() { return true; }

    @Override
    public int getAmount() { return this.amount; }
}

Then come two functional interfaces for conditions and transitions:

public interface Condition<A, S> extends Predicate<StateTuple<A, S>> {}

public interface Transition<A, S> extends Function<StateTuple<A, S>, S> {}

These act as type aliases in order to simplify the code. We could have used the predicate and the function directly. In the same manner, we use a StateTuple class instead of a normal tuple:

public class StateTuple<A, S> {

    public final A value;
    public final S state;

    public StateTuple(A a, S s) {
        value = Objects.requireNonNull(a);
        state = Objects.requireNonNull(s);
    }
}

This is exactly the same as an ordinary tuple, with named members instead of numbered ones. Numbered members allow using the same class everywhere, but a specific class like this one makes the code easier to read, as we will see.
The last utility class is Outcome, which represents the result returned by the state machine:

public class Outcome {

    public final Integer account;
    public final List<Either<Exception, Integer>> operations;

    public Outcome(Integer account, List<Either<Exception, Integer>> operations) {
        this.account = account;
        this.operations = operations;
    }

    public String toString() {
        return "(" + account.toString() + "," + operations.toString() + ")";
    }
}

This again could be replaced with a Tuple<Integer, List<Either<Exception, Integer>>>, but using named members makes the code easier to read. (In some functional languages, we could use type aliases for this.) Here, we use an Either class, which is another kind of monad that Java does not offer. I will not show the complete class, but only the parts that are useful for this example:

public interface Either<A, B> {

    boolean isLeft();
    boolean isRight();
    A getLeft();
    B getRight();

    static <A, B> Either<A, B> right(B value) {
        return new Right<>(value);
    }

    static <A, B> Either<A, B> left(A value) {
        return new Left<>(value);
    }

    class Left<A, B> implements Either<A, B> {

        private final A left;

        private Left(A left) {
            this.left = left;
        }

        @Override
        public boolean isLeft() { return true; }

        @Override
        public boolean isRight() { return false; }

        @Override
        public A getLeft() { return this.left; }

        @Override
        public B getRight() {
            throw new IllegalStateException("getRight() called on Left value");
        }

        @Override
        public String toString() { return left.toString(); }
    }

    class Right<A, B> implements Either<A, B> {

        private final B right;

        private Right(B right) {
            this.right = right;
        }

        @Override
        public boolean isLeft() { return false; }

        @Override
        public boolean isRight() { return true; }

        @Override
        public A getLeft() {
            throw new IllegalStateException("getLeft() called on Right value");
        }

        @Override
        public B getRight() { return this.right; }

        @Override
        public String toString() { return right.toString(); }
    }
}

This implementation is missing a flatMap method, but we will not need it.
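To see how such an Either is produced and consumed in practice, here is a small, self-contained sketch (the divide example is invented for illustration; the interface mirrors the one above, with anonymous classes standing in for Left and Right):

```java
public class EitherDemo {

    // A minimal Either with only the parts used below.
    interface Either<A, B> {
        boolean isLeft();
        boolean isRight();
        A getLeft();
        B getRight();
    }

    static <A, B> Either<A, B> left(A a) {
        return new Either<A, B>() {
            public boolean isLeft() { return true; }
            public boolean isRight() { return false; }
            public A getLeft() { return a; }
            public B getRight() { throw new IllegalStateException("getRight() called on Left value"); }
        };
    }

    static <A, B> Either<A, B> right(B b) {
        return new Either<A, B>() {
            public boolean isLeft() { return false; }
            public boolean isRight() { return true; }
            public A getLeft() { throw new IllegalStateException("getLeft() called on Right value"); }
            public B getRight() { return b; }
        };
    }

    // Returns either an error message (Left) or a result (Right),
    // instead of throwing an exception.
    static Either<String, Integer> divide(int a, int b) {
        return b == 0 ? left("division by zero") : right(a / b);
    }

    public static void main(String[] args) {
        Either<String, Integer> ok = divide(10, 2);
        Either<String, Integer> ko = divide(10, 0);
        System.out.println(ok.isRight() ? ok.getRight() : ok.getLeft()); // prints 5
        System.out.println(ko.isRight() ? ko.getRight() : ko.getLeft()); // prints division by zero
    }
}
```

This is the same idea the Outcome log relies on: each operation in the list is either a failure (Left) or a successful amount (Right).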
The Either class is somewhat like Java's Optional class, in that it may be used to represent the result of an evaluation that may return a value or something else, like an exception or an error message. What is important is that it can hold one of two things of different types. We now have all we need to use our state machine:

public class Account {

    public static StateMachine<Input, Outcome> createMachine() {

        Condition<Input, Outcome> predicate1 = t -> t.value.isDeposit();
        Transition<Input, Outcome> transition1 = t -> new Outcome(
            t.state.account + t.value.getAmount(),
            t.state.operations.cons(Either.right(t.value.getAmount())));

        Condition<Input, Outcome> predicate2 = t -> t.value.isWithdraw()
            && t.state.account >= t.value.getAmount();
        Transition<Input, Outcome> transition2 = t -> new Outcome(
            t.state.account - t.value.getAmount(),
            t.state.operations.cons(Either.right(-t.value.getAmount())));

        Condition<Input, Outcome> predicate3 = t -> true;
        Transition<Input, Outcome> transition3 = t -> new Outcome(
            t.state.account,
            t.state.operations.cons(Either.left(new IllegalStateException(
                String.format("Can't withdraw %s because balance is only %s",
                    t.value.getAmount(), t.state.account)))));

        List<Tuple<Condition<Input, Outcome>, Transition<Input, Outcome>>> transitions = List.apply(
            new Tuple<>(predicate1, transition1),
            new Tuple<>(predicate2, transition2),
            new Tuple<>(predicate3, transition3));

        return new StateMachine<>(transitions);
    }
}

This could not be simpler. We just define each possible condition and the corresponding transition, then build a list of (Condition, Transition) tuples that is used to instantiate the state machine. There are, however, two rules that must be enforced:

- Conditions must be put in the right order, with the more specific first and the more general last.
- We must be careful to match all possible cases. Otherwise, we will get an exception.

At this stage, nothing has been evaluated. We did not even use the initial state!
To run the state machine, we must create a list of inputs and feed it into the machine, for example:

List<Input> inputs = List.apply(
    new Deposit(100),
    new Withdraw(50),
    new Withdraw(150),
    new Deposit(200),
    new Withdraw(150));

StateMonad<Outcome, Outcome> state = Account.createMachine().process(inputs);

Again, nothing has been evaluated yet. To get the result, we evaluate the state, passing it an initial Outcome:

Outcome outcome = state.eval(new Outcome(0, List.empty()));

If we run the program with the list above and call toString() on the resulting outcome (we can't do much more with it, since our Either class is so minimal!), we get the following result:

(100,[-150, 200, java.lang.IllegalStateException: Can't withdraw 150 because balance is only 50, -50, 100, NIL])

This is a tuple of the resulting balance of the account (100) and the list of operations that have been carried out. We can see that successful operations are represented by a signed integer, and failed operations are represented by an error message. This is of course a very minimal example, and as usual, one may think it would be much easier to do it the imperative way. However, think of a more complex example, like a text parser. All there is to do to adapt the state machine is to define the state representation (the Outcome class), define the possible inputs, and create the list of (Condition, Transition) tuples. Going the functional way does not make the whole thing simpler. However, it allows abstracting the implementation of the state machine from the requirements. The only thing we have to do to create a new state machine is to write the new requirements!
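To make the bank-account behavior above concrete, the same transitions can be sketched as a plain fold over the inputs, without the monadic machinery. This is not the article's implementation, just an illustrative stand-in (class and method names are invented; inputs are encoded as signed amounts instead of Deposit/Withdraw objects):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AccountFoldDemo {

    // Minimal stand-in for the article's Outcome: a balance plus an operation log.
    public static final class Outcome {
        public final int account;
        public final List<String> operations;
        public Outcome(int account, List<String> operations) {
            this.account = account;
            this.operations = operations;
        }
    }

    // One transition step: deposits always apply; withdrawals apply only when covered.
    public static Outcome step(Outcome state, int input) {
        List<String> ops = new ArrayList<>(state.operations);
        if (input >= 0) {                       // deposit
            ops.add(String.valueOf(input));
            return new Outcome(state.account + input, ops);
        } else if (state.account >= -input) {   // withdrawal, sufficient balance
            ops.add(String.valueOf(input));
            return new Outcome(state.account + input, ops);
        } else {                                // withdrawal refused, state unchanged
            ops.add("error: balance is only " + state.account);
            return new Outcome(state.account, ops);
        }
    }

    // Fold the list of inputs over the initial state.
    public static Outcome process(Outcome initial, List<Integer> inputs) {
        Outcome state = initial;
        for (int i : inputs) state = step(state, i);
        return state;
    }

    public static void main(String[] args) {
        Outcome result = process(new Outcome(0, new ArrayList<>()),
            Arrays.asList(100, -50, -150, 200, -150));
        System.out.println(result.account);    // prints 100
        System.out.println(result.operations);
    }
}
```

Running it on the same sequence as above yields the same final balance of 100; what the state monad adds is that the fold itself is abstracted away and nothing runs until an initial state is supplied.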
October 9, 2014
by Pierre-Yves Saumont
· 25,552 Views · 7 Likes
Spring Integration with JMS and Map Transformers
In this article I explain how Spring's built-in transformers work when transforming an object message into a map message. Sometimes messages need to be transformed before they can be consumed, in order to achieve a business purpose. For example, a producer may use plain XML as its payload to produce a message, while a consumer is interested in a Java object, or in types like plain text, name-value pairs, or a JSON model. Spring Integration provides endpoints such as service activators, channel adapters, message bridges, gateways, transformers, filters, and routers. This example shows how a transformer endpoint transforms an object message into a map message.

References: Spring Integration, Spring with JMS, Spring with JUnit, Mockrunner

STS high-level view

spring-mockrunner.xml

In the spring-mockrunner.xml file, I define a MockQueue and a MockQueueConnectionFactory for the inbound queue and the outbound queue, for quick testing purposes. The inbound queue is where the object message is published from the ObjectToMapTransformerTest.java class. The outbound queue expects a MapMessage and is listened to by the MapMessageListener.java class. For more information on how Mockrunner works, please check my previous article, Mockrunner with Spring JMS.
pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.springframework.samples</groupId>
    <artifactId>spring-int-jms-basic</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <properties>
        <java.version>1.6</java.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <spring-framework.version>3.2.3.RELEASE</spring-framework.version>
        <logback.version>1.0.13</logback.version>
        <slf4j.version>1.7.5</slf4j.version>
        <junit.version>4.11</junit.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring-framework.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
            <version>${spring-framework.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-core</artifactId>
            <version>2.2.4.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-jmx</artifactId>
            <version>2.2.4.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-jms</artifactId>
            <version>2.2.4.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${slf4j.version}</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>${logback.version}</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>${spring-framework.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.mockrunner</groupId>
            <artifactId>mockrunner-jms</artifactId>
            <version>1.0.3</version>
        </dependency>
        <dependency>
            <groupId>javax.jms</groupId>
            <artifactId>jms</artifactId>
            <version>1.1</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-mapper-asl</artifactId>
            <version>1.9.3</version>
            <scope>compile</scope>
        </dependency>
    </dependencies>
</project>

spring-int-jms.xml

The endpoint is configured to connect to a JMS server, fetch messages, and publish them onto a local channel, i.e. inputChannel. Its connection-factory and destination refer to the MockQueueConnectionFactory and MockQueue (inboundQueue) beans from the spring-mockrunner.xml file. inputChannel and outputChannel are defined as queue channels.

objectToMapTransformer: the object-to-map-transformer element takes the payload from the input channel (the original object message from mockrunner-in-queue) and emits a name-value-paired Map object onto the output channel, i.e. outputChannel; the outboundJmsAdapter bean fetches this message and publishes it to the queue mockrunner-out-queue.

inboundJmsAdapter: the inbound-channel-adapter bean is responsible for receiving messages from a JMS server; here it reads from the mock queue named mockrunner-in-queue (see the ObjectToMapTransformerTest.java class).

outboundJmsAdapter: the outbound-channel-adapter bean is responsible for fetching messages from the channel, i.e. outputChannel, and publishing them to a JMS queue or topic. In this example, the outboundJmsAdapter reads the message from outputChannel as a MapMessage and publishes it to the outbound queue (mockrunner-out-queue).
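Conceptually, the object-to-map transformation turns the payload bean into name-value pairs derived from its properties. Here is a rough, hand-rolled sketch of that conversion (reflection-based and purely illustrative; it is not Spring's actual implementation, and the nested Department class is a stand-in for the domain object shown later):

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class ObjectToMapSketch {

    // Reads every public no-arg getXxx() getter (except getClass) into a
    // name/value map, roughly what the transformer produces from a payload.
    public static Map<String, Object> toMap(Object bean) throws Exception {
        Map<String, Object> map = new HashMap<>();
        for (Method m : bean.getClass().getMethods()) {
            String name = m.getName();
            if (name.startsWith("get") && !name.equals("getClass")
                    && m.getParameterTypes().length == 0) {
                // "getDeptNo" -> "deptNo"
                String key = Character.toLowerCase(name.charAt(3)) + name.substring(4);
                map.put(key, m.invoke(bean));
            }
        }
        return map;
    }

    // A stand-in for the article's Department domain object.
    public static class Department {
        public Integer getDeptNo() { return 10; }
        public String getName() { return "sales"; }
        public String getLocation() { return "tx"; }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(toMap(new Department()));
        // e.g. {location=tx, name=sales, deptNo=10}
    }
}
```

The resulting map mirrors the MockMapMessage body shown in the output section at the end of the article.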
MapMessageListener.java

package com.spijb.listener;

import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.Session;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jms.listener.SessionAwareMessageListener;

public class MapMessageListener implements SessionAwareMessageListener<MapMessage> {

    private static final Logger log = LoggerFactory.getLogger(MapMessageListener.class);

    @Override
    public void onMessage(MapMessage message, Session session) throws JMSException {
        log.info("Message received \r\n" + message);
    }
}

This is a plain MapMessageListener class that prints the message received from the queue.

Department.java

package com.spijb.domain;

import java.io.Serializable;

public class Department implements Serializable {

    private static final long serialVersionUID = 1L;

    private final Integer deptNo;
    private final String name;
    private final String location;

    public Department() {
        deptNo = 10;
        name = "sales";
        location = "tx";
    }

    public Department(Integer dno, String name, String loc) {
        this.deptNo = dno;
        this.name = name;
        this.location = loc;
    }

    public Integer getDeptNo() {
        return deptNo;
    }

    public String getName() {
        return name;
    }

    public String getLocation() {
        return location;
    }

    @Override
    public String toString() {
        return this.deptNo + "-> " + this.name + "->" + this.location;
    }
}

This is the domain object sent as a message. The default constructor assigns deptNo 10, name "sales", and location "tx"; a parameterized constructor is also provided.
Spring JUnit class: ObjectToMapTransformerTest.java

package com.spijb.invoker;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.ObjectMessage;
import javax.jms.Session;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import com.mockrunner.mock.jms.MockQueue;
import com.spijb.domain.Department;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration({"classpath:spring-mockrunner.xml", "classpath:spring-int-jms.xml"})
public class ObjectToMapTransformerTest {

    @Autowired
    private JmsTemplate jmsTemplate;

    @Autowired
    private MockQueue inboundQueue;

    @Test
    public void shouldSendMessage() throws InterruptedException {
        final Department defaultDepartment = new Department();
        jmsTemplate.send(inboundQueue, new MessageCreator() {
            @Override
            public Message createMessage(Session session) throws JMSException {
                ObjectMessage objectMessage = session.createObjectMessage();
                objectMessage.setObject(defaultDepartment);
                return objectMessage;
            }
        });
        Thread.sleep(5000);
    }
}

This Spring/JUnit test class sends an object message to the inbound queue using Mockrunner.
Output:

INFO: started inboundJmsAdapter
Oct 06, 2014 1:24:25 PM org.springframework.integration.endpoint.AbstractEndpoint start
INFO: started org.springframework.integration.config.ConsumerEndpointFactoryBean#1
13:24:26.882 [org.springframework.jms.listener.DefaultMessageListenerContainer#0-1] INFO c.spijb.listener.MapMessageListener - Message received
com.mockrunner.mock.jms.MockMapMessage: {location=tx, name=sales, deptNo=10}
Oct 06, 2014 1:24:30 PM org.springframework.context.support.AbstractApplicationContext doClose
INFO: closing org.springframework.context.support.GenericApplicationContext@5840979b: startup date [Mon Oct 06 13:24:25 CDT 2014]; root of context hierarchy
Oct 06, 2014 1:24:30 PM org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup stop
INFO: stopping beans in phase 2147483647

The highlighted line above (the MockMapMessage) is the output as a map.
October 9, 2014
by Upender Chinthala
· 22,735 Views