The Latest Coding Topics

Generating CSV-files on .NET
I have a project where I need to output some reports as CSV files. I found a good library called CsvHelper on NuGet and it works perfectly for me. After some playing with it I was able to generate CSV files that display correctly in Excel. Here is some sample code, plus extensions that make it easier to work with DataTables.

Simple report

Here's a simple fragment of code that illustrates how to use CsvHelper:

    using (var writer = new StreamWriter(Response.OutputStream))
    using (var csvWriter = new CsvWriter(writer))
    {
        csvWriter.Configuration.Delimiter = ";";

        csvWriter.WriteField("Task No");
        csvWriter.WriteField("Customer");
        csvWriter.WriteField("Title");
        csvWriter.WriteField("Manager");
        csvWriter.NextRecord();

        foreach (var project in data)
        {
            csvWriter.WriteField(project.Code);
            csvWriter.WriteField(project.CustomerName);
            csvWriter.WriteField(project.Name);
            csvWriter.WriteField(project.ProjectManagerName);
            csvWriter.NextRecord();
        }
    }

Of course, you can use other methods to output a whole object or object list in one shot. Here I needed custom headers that don't match property names 1:1.

Generic helper for DataTable

In some of my projects, data comes from the service layer as a DataTable. I don't want to add new models or Data Transfer Objects (DTOs) without good reason, and a DataTable is actually flexible enough if you need to add new fields to a report and you want to do it fast. As DataTables are not supported out of the box (yet?), I wrote simple extension methods that work on data views. When called on a DataTable, the extension selects the default view automatically. The idea is that you can set a filter on the default data view and leave out the rows you don't need. If you just want to show a DataTable on screen as a table, check out my post "Simple view to display contents of DataTable".

    public static class CsvHelperExtensions
    {
        public static void WriteDataTable(this CsvWriter csvWriter, DataTable table)
        {
            WriteDataView(csvWriter, table.DefaultView);
        }

        public static void WriteDataView(this CsvWriter csvWriter, DataView view)
        {
            foreach (DataColumn col in view.Table.Columns)
            {
                csvWriter.WriteField(col.ColumnName);
            }
            csvWriter.NextRecord();

            foreach (DataRowView row in view)
            {
                foreach (DataColumn col in view.Table.Columns)
                {
                    csvWriter.WriteField(row[col.ColumnName]);
                }
                csvWriter.NextRecord();
            }
        }
    }

And here is a simple MVC controller action that gets data as a DataTable and returns it as a CSV file. The result is a CSV file that opens correctly in Excel.

    [HttpPost]
    public void ExportIncomesReport()
    {
        var data = // Get DataTable here

        Response.ContentType = "text/csv";
        Response.AddHeader("Content-disposition", "attachment;filename=IncomesReport.csv");

        var preamble = Encoding.UTF8.GetPreamble();
        Response.OutputStream.Write(preamble, 0, preamble.Length);

        using (var writer = new StreamWriter(Response.OutputStream))
        using (var csvWriter = new CsvWriter(writer))
        {
            csvWriter.Configuration.Delimiter = ";";
            csvWriter.WriteDataTable(data);
        }
    }

One thing to notice: with CsvHelper we have full control over the stream we write data to, which lets us write more performant code.

Related Posts: .Net Framework 4.0: string.IsNullOrWhiteSpace() method · Exporting GridView Data to Excel · Code Contracts: Hiding ContractException · How to dump object properties · My object to object mapper source released

The post Generating CSV-files on .NET appeared first on Gunnar Peipman - Programming Blog.
June 26, 2015
by Gunnar Peipman
· 4,465 Views · 1 Like
What Is an ASP.NET Console Application?
One mystery in ASP.NET 5 that people are asking me about is ASP.NET 5 console applications. We have web applications running on some web server – what is the point of a new type of command-line application that refers by name to a web framework? Here's my explanation.

What do we have with ASP.NET 5?

  • CoreCLR – a minimal subset of the .NET Framework that is ~11MB in size and supports true side-by-side execution. Yes, your application can specify the exact version of the CLR it needs to run, and it doesn't conflict with other versions of the CLR on the same box.
  • DNX runtime – formerly known as the K runtime, a console-based runtime to manage CLR versions, restore packages and run the commands that our application defines.
  • Framework-level dependency injection – this is something we don't have with classic console applications that have a static entry point, but we do have it with ASP.NET console applications (they also have a Main method, but it's not static).
  • More independence from Visual Studio – it's easier to build and run applications on build and continuous integration servers, as there's no need (or less need) for Visual Studio and its components.

Applications can define their own commands for different things like generating EF migrations and running unit tests. Also, ASP.NET 5 is more independent from IIS and can be hosted by far smaller servers. Microsoft provides with ASP.NET 5 a simple web listener server and a new server called Kestrel that is based on libuv and can also be used in Unix-based environments.

Application commands

Your application can define commands that the DNX runtime is able to read from your application configuration file. All these commands are actually ASP.NET console applications that run on the command line with no need for Visual Studio installed on your box. When you run a command using DNX, DNX creates an instance of a class and looks for the Main() method. I will come back to these commands in future posts.

Framework-level dependency injection

What we don't have with classic console applications is framework-level dependency injection. I think it's not easy to implement when the application is actually a class with one static entry point. ASP.NET console applications can be more aware of the technical environment around them by supporting dependency injection. Also, we can take our console program to all environments where CoreCLR is running and we don't have to worry about the platform.

New environments

Speculation alert! All ideas in this little chapter are pure speculation and I have no public or secret sources to refer to. This is just what I see possibly coming in the near future. But I'm not a successful sidekick.

CoreCLR can take our ASP.NET applications to different new environments. On the Azure cloud we will possibly see that WebJobs can be built as ASP.NET console applications and we can host them with web applications built for CoreCLR. As CoreCLR is very small – remember, just 11MB – I'm almost sure that ASP.NET 5 and console applications will find their way to small devices like the Raspberry Pi, routers, wearables and so on. It's possible we don't need web server support in those environments but we still want to use CoreCLR from the console. Maybe this market is not big today, but it will be huge tomorrow.

Wrapping up

Although the name "ASP.NET console application" is a little confusing, we can think of these applications as console applications for DNX. Currently the main usage for these applications is ASP.NET 5 commands, but by my speculation we will see many more scenarios for these applications in the near future.

Related Posts: ASP.NET MVC 3: Using controllers scaffolding · Visual Studio 2010: Web.config transforms · Creating gadget-like blocks for Windows Home Server 2011 web add-in user interface · ASP.NET MVC 3: Using multiple view engines in same project · Starting with ASP.NET MVC

The post What is ASP.NET console application? appeared first on Gunnar Peipman - Programming Blog.
June 26, 2015
by Gunnar Peipman
· 3,201 Views
How to Debug Your Maven Build with Eclipse
When running a Maven build with many plugins (e.g. the jOOQ or Flyway plugins), you may want to have a closer look under the hood to see what's going on internally in those plugins, or in your extensions of those plugins. This may not appear obvious when you're running Maven from the command line, e.g. via:

    C:\Users\jOOQ\workspace>mvn clean install

Luckily, it is rather easy to debug Maven. In order to do so, just create the following batch file on Windows:

    @ECHO OFF
    IF "%1" == "off" (
        SET MAVEN_OPTS=
    ) ELSE (
        SET MAVEN_OPTS=-Xdebug -Xnoagent -Djava.compile=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005
    )

Of course, you can do the same on a Mac OS X or Linux box by using export instead of SET. Now, run the above batch file and proceed again with building:

    C:\Users\jOOQ\workspace>mvn_debug

    C:\Users\jOOQ\workspace>mvn clean install
    Listening for transport dt_socket at address: 5005

Your Maven build will now wait for a debugger client to connect to your JVM on port 5005 (change to any other suitable port). We'll do that now with Eclipse. Just add a new Remote Java Application that connects on a socket, and hit "Debug". That's it. We can now set breakpoints and debug through our Maven process like through any other similar kind of server process. Of course, things work exactly the same way with IntelliJ or NetBeans. Once you're done debugging your Maven process, simply call the batch file again with the parameter off:

    C:\Users\jOOQ\workspace>mvn_debug off

    C:\Users\jOOQ\workspace>mvn clean install

And your Maven builds will no longer be debugged. Happy debugging!
June 25, 2015
by Lukas Eder
· 25,037 Views
Quick Tip: Exception Handling in Message Driven Beans
Let's do a quick review of exception handling with regard to Message Driven Beans. The entry point into an MDB is the overridden onMessage method. It does not provide any scope for throwing checked exceptions, so you will need to propagate unchecked exceptions (subclasses of java.lang.RuntimeException) from your code if you want to handle error scenarios.

Types of exceptions

There are two categories of exceptions defined by the EJB specification, and the container differentiates one from the other based on well-stated semantics (again, in the EJB specification).

Application Exception: If you throw a checked exception (not possible for an MDB, but other EJBs can use this) which is not java.rmi.RemoteException or its subclass, OR a RuntimeException (unchecked) which is annotated with @javax.ejb.ApplicationException, the container treats this as an Application Exception. As a result, it rolls back the transaction if specified by the @javax.ejb.ApplicationException rollback attribute and retains the MDB instance for reuse – this is extremely important to note.

    @ApplicationException(rollback = true)
    public class InvalidCustomerIDException extends RuntimeException {
        public InvalidCustomerIDException() {
            super();
        }
    }

System Exception: If you throw a java.rmi.RemoteException (a checked exception) or its subclass, OR a RuntimeException (unchecked) which is not annotated with @javax.ejb.ApplicationException, the container treats it as a System Exception. As a result, it executes certain operations like transaction rollback and discards the MDB instance (this is critical).

    public class SystemExceptionExample extends RuntimeException {
        public SystemExceptionExample() {
            super();
        }
    }

What about the critical part?

It is important to take into account the discarding of the MDB instance. In the case of System Exceptions, the container always discards the instance – so make sure that you are using these exceptions for their intended reason. In case you are using Application Exceptions and they are unchecked ones (they have to be in the case of MDBs), make sure you annotate them with @javax.ejb.ApplicationException – this will ensure that the MDB instance itself is not discarded. Under heavy loads, you want as many MDBs in the pool as possible and you want to avoid MDB instances being taken out of service. Sensible exception handling can help you reach this goal. It's as simple as annotating your exception class with @javax.ejb.ApplicationException and leaving the rest to the container :-)

References

The EJB (3.2) specification is a 465-page PDF which might look intimidating at the outset, but it's a great resource nonetheless and not that hard to grasp. In case you want to understand exception handling semantics in further detail, please do check out Chapter 9, which is dedicated to this topic. Cheers!
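To see where the two behaviors bite in practice, here is a minimal MDB sketch of my own (not from the original article) that throws both kinds of exception from onMessage; the destination name, message property and order-processing logic are hypothetical.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/OrdersQueue"),
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
    })
    public class OrderProcessorMDB implements MessageListener {

        @Override
        public void onMessage(Message message) {
            try {
                String customerId = message.getStringProperty("customerId");
                if (customerId == null || customerId.isEmpty()) {
                    // Application Exception: rollback = true rolls the transaction back,
                    // but this MDB instance stays in the pool for reuse.
                    throw new InvalidCustomerIDException();
                }
                // ... process the order ...
            } catch (JMSException e) {
                // System Exception: an unannotated RuntimeException, so the container
                // rolls back and discards this MDB instance.
                throw new RuntimeException(e);
            }
        }
    }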
June 25, 2015
by Abhishek Gupta DZone Core CORE
· 2,688 Views · 1 Like
Spring Integration Kafka 1.2 is Available, With 0.8.2 Support and Performance Enhancements
Spring Integration Kafka 1.2 is out with a major performance overhaul.
June 25, 2015
by Pieter Humphrey DZone Core CORE
· 2,638 Views
What's Coming With JSF 2.3?
There seems to be a good deal of excitement in the Java EE community around the new MVC specification. This is certainly great and most understandable. Some (perhaps more established) parts of the Java EE community have in the meanwhile been more quietly contributing to the continuing evolution of JSF 2.3. So what is in JSF 2.3? The real answer is that it depends on what the JSF community needs. There is a small raft of work that's on the table now, but I think the JSF community should be very proactive in helping determine what needs to be done to keep the JSF community strong for years to come. Just as he did for JSF 2.2, Java EE community advocate Arjan Tijms has started maintaining a regularly updated blog entry listing the things the JSF 2.3 expert group is working on. So far he has detailed CDI injection improvements, the newly added post-render view event, improved collections support and a few others. You should definitely check it out as a JSF developer and provide your input. Arjan also has an excellent collection of Java EE 8 blog entries generally on zeef.com. On a related note, JSF specification lead Ed Burns wrote up a very interesting recent blog entry outlining the continuing momentum behind the strong JSF ecosystem. He highlighted a couple of brand new JSF plugins that we will explore in depth in future entries.
June 25, 2015
by Reza Rahman DZone Core CORE
· 4,321 Views · 2 Likes
7 Things I Didn’t Expect to Hear at Gartner’s IT Ops Summit
Last week's Gartner IT Operations Strategies & Solutions Summit in Orlando, Fla., was exactly what you'd expect—a place to talk about the IT operations issues impacting some of the largest companies in the world. Even so, there were a few interesting surprises. Among them:

1. Bi-modal is big. Not everyone will succeed. Gartner continued to tell its customers to employ two modes of IT—a traditional, slower moving capability for older, typically internal systems of record; and a high-speed, experimental one for new, typically customer-facing Web and mobile apps. "This is a time of experimentation and innovation," said Gartner VP and distinguished analyst Chris Howard in his opening keynote. Organizations can't ignore that there are multiple speeds and they should participate in all. Gartner managing VP Ronni Colville added that by 2017, 75% of IT orgs will have this "bi-modal" IT capability.

See also: Bi-Modal IT: Gartner Endorses Both Disruptive and Conservative Approaches to Technology

However, "50% will make a mess of it," Colville said. Why? Not necessarily because of technology failings, but more often because of a lack of people skills.

2. IT success is all about people. Donna Scott, also a Gartner VP and distinguished analyst, told her keynote audience that "you will be judged on agility, speed, and innovation." However, the biggest problems Gartner sees for infrastructure and operations team engagement and innovation are lack of time, company culture that's not conducive to these approaches, and a lack of business skills in IT. More than half of the people responding to an in-room poll said "people" are the part of IT ops that must change first. Not technology. Gartner research director George Spafford underscored similar issues in large organizations trying to use DevOps at scale: people and "human factors" are the biggest concerns from his in-room poll. All these probably contributed to hiring best-selling author Daniel Pink as a keynote speaker on the opening day of the conference. His focus? Not IT or architecture. Instead, he pounded home the importance of influencing people and selling internally.

3. Big orgs are trying DevOps. But the issues are different at scale. In numerous sessions I saw many hands go up when analysts asked, "Who here is trying DevOps?" Clearly, the approach is getting traction in large companies. But there's lots of learning still to do. In fact, that was Spafford's biggest bit of advice. "Always be learning," he said, "trying to see what works and what breaks, especially at scale." And, even once you've had some initial success, keep learning. "If you've done DevOps, stay humble," he advised.

4. Looking to innovative organizations for ideas … analytics on the rise. Many sessions addressed how large organizations are taking on ideas fostered by smaller, more risk-tolerant companies, and offered advice for doing so successfully. In addition to multiple discussions of DevOps, an entire session was devoted to establishing your own "Genius Bar®"—a "walk-up IT support center" as explained in this CIO article. As at previous conferences, Gartner research VP Cameron Haight ran several sessions on lessons learned from firms running massive, Web-scale IT systems. "You need lots of data … and access to it inexpensively," he said. Some commercial monitoring companies (New Relic included!) got a shout out for taking the lessons of Web-scale IT to heart in their offerings.
In addition, Haight said, “Analytics are increasingly important for application performance monitoring given the huge amount of data now available.” 5. Cloud: Enterprises want it, but aren’t very good at it yet. Gartner research director Dennis Smith talked through the enterprise’s interest in cloud computing. A huge majority of his in-room poll wanted some mix of both public and private cloud, while only 9% wanted to use only a private cloud environment and a measly 4% were looking to move entirely to the public cloud. The most popular choice (41%) was an 80/20 split between private and public cloud infrastructure. “Enterprises don’t make the dean’s list,” for cloud usage, Smith said, earning no more than a C average in his opinion. Large organizations are doing well at visibility, governance, and delivering standardized stacks, he said, but are less skilled at optimizing for these new environments. Still, Smith said the trends point toward enterprises improving on all fronts. 6. Cloud security can be better than yours. Importantly, Gartner VP and distinguished analyst Neil MacDonald gave the cloud a vote of confidence: noting that, for a variety of reasons, “Well-managed public cloud can be more secure than your own data center.” For example, on-premise software can pose serious security risks, he said, because of “deployment lag” where customers are stuck using software releases with unpatched security vulnerabilities. With a cloud-based Software-as-a-Service (SaaS), security updates can be more quickly rolled out to all customers. But cloud security can be different, requiring a shift to information-level security from OS-level security. Best practices include doing away with a huge pool of all-powerful sysadmins in favor of JEA, or “just enough administration,” where sysadmins have just enough privileges to do their job, and no more. An analogous security practice for compute resources is “least privilege,” where apps and microservices can’t talk to each other unless they specifically need to do so. Audience polling supported MacDonald’s optimistic view of cloud security, which suggests that large enterprises may struggle less with their cloud policies moving forward. 7. Containers: Try ’em! Ahead of this week’s DockerCon in San Francisco, Gartner devoted significant airtime to educating the audience on containers and microservices. My summary of ‪Gartner VP and distinguished analyst Tom Bittman’s advice on containers was simple: Try ’em. Now. Complement them with VMs. ‪And Docker (the company) is important, but not the be-all and end-all in this space. Bittman (copping to some deja vu from Gartner presentations he made on server virtualization 13 years ago) noted that while virtualization has been focused on admin and ops functions, containers are focused on value for developers. But because containers are well suited for driving up VM utilization for workloads that share the same OS, we can expect to see more combinations of containers and server virtualization. Finally, Bittman underscored that Gartner doesn’t see containers having much impact on premise, but making a huge difference in the cloud. That doesn’t necessarily fit with what’s been shown in other research, such as this 2015 State of Containers Survey sponsored by VMblog.com and StackEngine, so we’ll want to watch how this plays out. This is all a lot to digest. 
The Gartner IT Operations Strategies & Solutions Summit acknowledges the importance of dealing with existing IT systems and practices as well as promising new technologies and thinking, and tries to point a way forward. In fact, Haight had a very good quote about microservices that I thought also served to wrap up the entire event: "If you want to run with the big dogs, you need to rethink application architecture," he said. That can be very difficult for an enterprise to fully implement … but also very appealing. Note: Al Sargent contributed to this post. All product and company names herein may be trademarks of their registered owners. Server, tortoise and hare, business team, and cloud security images courtesy of Shutterstock.com.
June 24, 2015
by Fredric Paul
· 1,713 Views
Literate Programming and GitHub
I remain captivated by the ideals of Literate Programming. My fork of PyLit (https://github.com/slott56/PyLit-3) coupled with Sphinx seems to handle literate programming in a very elegant way. It works like this:

1. Write RST files describing the problem and the solution. This includes the actual implementation code, and everything else that's relevant.
2. Run PyLit3 to build the final Python code from the RST documentation. This should include setup.py so that the package can be installed properly.
3. Run Sphinx to build pretty HTML pages (and LaTeX) from the RST documentation.

I often include the unit tests along with the Sphinx build so that I'm sure that things are working. The challenge is the final presentation of the whole package. The HTML can be easy to publish, but it can't (trivially) be used to recover the code. We have to upload two separate and distinct things. (We could use BeautifulSoup to recover RST from HTML and then PyLit to rebuild the code. But that sounds crazy.) The RST is easy to publish, but hard to read, and it requires a pass with PyLit to emit the code and then another pass with Sphinx to produce the HTML. A single upload doesn't work well. If we publish only the Python code we've defeated the point of literate programming. Even if we focus on the Python, we need to do a separate upload of HTML to provide the supporting documentation.

After working with this for a while, I've found that it's simplest to have one source and several targets. I use RST ⇒ (.py, .html, .tex). This encourages me to write documentation first. I often fail, and have blocks of code with tiny summaries and non-existent explanations. PyLit allows one to use .py ⇒ .rst ⇒ .html, .tex. I've messed with this a bit and don't like it as much. Code first leaves the documentation as a kind of afterthought.

How can we publish simply and cleanly, without separate uploads? Enter GitHub and gh-pages. See the "sphinxdoc-test" project for an example. Also this: https://github.com/daler/sphinxdoc-test. The bulk of this is useful advice on a way to create the gh-pages branch from your RST source via Sphinx and some GitHub commands. Following this line of thinking, we almost have the case for three branches in an LP project:

1. The "master" branch with the RST source. And nothing more.
2. The "code" branch with the generated Python code created by PyLit.
3. The "gh-pages" branch with the generated HTML created by Sphinx.

I think I like this. We need three top-level directories. One has the RST source. A build script would run PyLit to populate the (separate) directory for the code branch. The build script would also run Sphinx to populate a third top-level directory for the gh-pages branch. The downside of this shows up when you need to create a branch for a separate effort. You have a "some-major-change" branch to master. Where's the code? Where's the doco? You don't want to commit either of those derived work products until you merge the "some-major-change" back into master.

GitHub Literate Programming

There are many LP projects on GitHub. There are perhaps a dozen which focus on publishing with GitHub-flavored Markdown as the source language. Because Markdown is about as easy to parse as RST, the tooling is simple. Because Markdown lacks semantic richness, I'm not switching. I've found that semantically rich markup is essential. This is a key feature of RST. It's carried forward by Sphinx to create very sophisticated markup. Think :code:`sample` vs. :py:func:`sample` vs. :py:mod:`sample` vs. :py:exc:`sample`.
The final typesetting may be similar, but they are clearly semantically distinct and create separate index entries. A focus on Markdown seems to be a limitation. It's encouraging to see folks experiment with literate programming using Markdown and GitHub. Perhaps other folks will look at more sophisticated markup languages like RST. Previous Exercises See https://sourceforge.net/projects/stingrayreader/ for a seriously large literate programming effort. The HTML is also hosted at SourceForge: http://stingrayreader.sourceforge.net/index.html. This project is awkward because -- well -- I have to do a separate FTP upload of the finished pages after a change. It's done with a script, not a simple "git push." SourceForge has a GitHub repository. https://sourceforge.net/p/stingrayreader/code/ci/master/tree/. But. SourceForge doesn't use GitHub.com's UI, so it's not clear if it supports the gh-pages feature. I assume it doesn't, but, maybe it does. (I can't even login to SourceForge with Safari... I should really stop using SourceForge and switch to GitHub.) See https://github.com/slott56/HamCalc-2.1 for another complex, LP effort. This predates my dim understanding of the gh-pages branch, so it's got HTML (in doc/build/html), but it doesn't show it elegantly. I'm still not sure this three-branch Literate Programming approach is sensible. My first step should probably be to rearrange the PyLit3 project into this three-branch structure.
June 24, 2015
by Steven Lott
· 1,783 Views
Writing a Download Server Part I: Always Stream, Never Keep Fully in Memory
Downloading various files (either text or binary) is the bread and butter of every enterprise application: PDF documents, attachments, media, executables, CSV, very large files, etc. Almost every application, sooner or later, will have to provide some form of download. Downloading is implemented in terms of HTTP, so it's important to fully embrace this protocol and take full advantage of it. Especially in Internet-facing applications, features like caching or user experience are worth considering. This series of articles provides a list of aspects that you might want to consider when implementing all sorts of download servers. Note that I avoid the term "best practices"; these are just guidelines that I find useful but are not necessarily always applicable.

One of the biggest scalability issues is loading a whole file into memory before streaming it. Loading a full file into a byte[] to later return it, e.g. from a Spring MVC controller, is unpredictable and doesn't scale. The amount of memory your server will consume depends linearly on the number of concurrent connections times the average file size – factors you don't really want to depend on so much. It's extremely easy to stream the contents of a file directly from your server to the client byte by byte (with buffering), and there are actually many techniques to achieve that. The easiest one is to copy bytes manually:

    @RequestMapping(method = GET)
    public void download(OutputStream output) throws IOException {
        try (final InputStream myFile = openFile()) {
            IOUtils.copy(myFile, output);
        }
    }

Your InputStream doesn't even have to be buffered; IOUtils.copy() will take care of that. However, this implementation is rather low-level and hard to unit test. Instead I suggest returning Resource:

    @RestController
    @RequestMapping("/download")
    public class DownloadController {

        private final FileStorage storage;

        @Autowired
        public DownloadController(FileStorage storage) {
            this.storage = storage;
        }

        @RequestMapping(method = GET, value = "/{uuid}")
        public Resource download(@PathVariable UUID uuid) {
            return storage
                    .findFile(uuid)
                    .map(this::prepareResponse)
                    .orElseGet(this::notFound);
        }

        private Resource prepareResponse(FilePointer filePointer) {
            final InputStream inputStream = filePointer.open();
            return new InputStreamResource(inputStream);
        }

        private Resource notFound() {
            throw new NotFoundException();
        }
    }

    @ResponseStatus(value = HttpStatus.NOT_FOUND)
    public class NotFoundException extends RuntimeException {
    }

Two abstractions were created to decouple the Spring controller from the file storage mechanism. FilePointer is a file descriptor, irrespective of where that file comes from. Currently we use one method from it:

    public interface FilePointer {
        InputStream open();
        //more to come
    }

open() allows reading the actual file, no matter where it comes from (file system, database BLOB, Amazon S3, etc.). We will gradually extend FilePointer to support more advanced features, like file size and MIME type. The process of finding and creating FilePointers is governed by the FileStorage abstraction:

    public interface FileStorage {
        Optional<FilePointer> findFile(UUID uuid);
    }

Streaming allows us to handle hundreds of concurrent requests without significant impact on memory and GC (only a small buffer is allocated in IOUtils). BTW, I am using UUIDs to identify files rather than names or some other form of sequence number. This makes it harder to guess individual resource names, thus more secure (obscure). More on that in the next articles.
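To make the abstractions above concrete, here is a minimal sketch of a FileStorage implementation (my own illustration, not code from the original series): a hypothetical in-memory registry mapping UUIDs to files on disk, which still streams lazily instead of loading anything into memory.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Map;
    import java.util.Optional;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical in-memory FileStorage: UUID -> Path lookup, streaming from disk.
    public class InMemoryFileStorage implements FileStorage {

        private final Map<UUID, Path> files = new ConcurrentHashMap<>();

        public void register(UUID uuid, Path path) {
            files.put(uuid, path);
        }

        @Override
        public Optional<FilePointer> findFile(UUID uuid) {
            final Path path = files.get(uuid);
            if (path == null) {
                return Optional.empty();
            }
            return Optional.of(new PathFilePointer(path));
        }

        private static final class PathFilePointer implements FilePointer {

            private final Path path;

            PathFilePointer(Path path) {
                this.path = path;
            }

            @Override
            public InputStream open() {
                try {
                    // Nothing is read eagerly; the controller streams straight from disk.
                    return Files.newInputStream(path);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            }
        }
    }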
Having this basic setup we can reliably serve lots of concurrent connections with minimal impact on memory. Remember that many components in the Spring framework and other libraries (e.g. servlet filters) may buffer the full response before returning it. Therefore it's really important to have an integration test that tries to download a huge file (tens of GiB) and makes sure the application doesn't crash (a rough sketch of such a check follows below).

Writing a download server:

  • Part I: Always stream, never keep fully in memory
  • Part II: Headers: Last-Modified, ETag and If-None-Match
  • Part III: Headers: Content-length and Range
  • Part IV: Implement HEAD operation (efficiently)
  • Part V: Throttle download speed
  • Part VI: Describe what you send (Content-type, et al.)

The sample application developed throughout these articles is available on GitHub.
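Here is one way such a check could look — a rough client-side sketch of my own (not from the original series) that streams a large download through RestTemplate and counts bytes without ever buffering the body; the URL, UUID and port are hypothetical.

    import java.io.InputStream;
    import org.springframework.http.HttpMethod;
    import org.springframework.web.client.ResponseExtractor;
    import org.springframework.web.client.RestTemplate;

    public class HugeDownloadSmokeTest {

        public static void main(String[] args) {
            final RestTemplate restTemplate = new RestTemplate();
            // Hypothetical endpoint of a locally running download server.
            final String url = "http://localhost:8080/download/123e4567-e89b-12d3-a456-426655440000";

            // Consume the body as a stream and discard it; heap usage stays flat
            // no matter how large the file is.
            final ResponseExtractor<Long> byteCounter = response -> {
                final byte[] buffer = new byte[8 * 1024];
                long total = 0;
                try (InputStream body = response.getBody()) {
                    int read;
                    while ((read = body.read(buffer)) != -1) {
                        total += read;
                    }
                }
                return total;
            };

            final Long bytes = restTemplate.execute(url, HttpMethod.GET, null, byteCounter);
            System.out.println("Downloaded " + bytes + " bytes");
        }
    }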
June 24, 2015
by Tomasz Nurkiewicz DZone Core CORE
· 16,804 Views
New Relic’s Docker Monitoring Now Generally Available
[This article was written by Andrew Marshall]

We've been talking a lot about Docker over the past few weeks—with good reason. Docker's explosive growth in popularity within the enterprise has enabled new distributed application architectures and, with it, a need for app-centric monitoring of your Docker containers within the context of the rest of your infrastructure. We're thrilled to announce today that New Relic's Docker monitoring is now generally available to New Relic customers, just in time for DockerCon 2015! (And as we noted last week, New Relic's Docker monitoring solution has been selected by Docker for its Ecosystem Technology Partner program as a proven container monitoring solution.)

Why app-centric monitoring?

If you're a software business using Docker containers, chances are you've done so to gain efficiencies from your system resources or portability across environments to shorten the cycle between writing and running code. Either way, adding Docker containers to your app development meant a new tier of infrastructure to monitor, which equated to a "black box" in your data—one that you had no visibility into from a monitoring perspective. Docker monitoring with New Relic is designed to "fix" this lack of monitoring visibility by adding an app-centric view of Docker containers to the existing New Relic Servers interface you already use. Now, instead of having a gap between the application and server monitoring views, we've added the ability to see containers with the same "first-class" experience as you would with virtual machines and servers. You can now drill down from the application (which is really what you care about) to the individual Docker container, and then to the physical server. No more blind spots! As we strive to do with all of our products, we took the approach of "important" over "impressive" when it comes to the container information we provide to users. Based on direct feedback from customers, we've tried to take the mystery out of finding the right container to help you get back to developing your applications. As the way people use containers changes over time, we plan to continue to listen to our customers to help shape how we approach Docker container monitoring.

Restoring a 360-degree view of your application environment

One example of how app-centric monitoring can impact a team moving to microservices or distributed application environments is Motus, a mobile workforce management company. Motus has been a New Relic customer for more than four years and recently has been shifting to a microservices architecture, with approximately 95% of its production workload now running in Docker containers. While Docker helped Motus gain speed and agility while reducing infrastructure complexity, the link between the application and what was happening with the container it was running on was broken. During the trial of New Relic's Docker monitoring, Motus was able to more easily identify which container an app was running on, all the way down to the node. That was a big help when they needed to investigate an issue and determine if a new container was required. During the beta alone, Motus estimates that using New Relic helped them reduce the time to investigate and fix problems with its Docker containers by 30%! Motus isn't just using New Relic to diagnose when a problem occurs. Docker monitoring with New Relic has helped Motus analyze and "right size" its containers for the application to better allocate resources for performance and budget.

Get started with New Relic's Docker monitoring today. For more information, please stop by our booth at DockerCon, June 22-23 in San Francisco!

Resources:

  • Motus Docker Monitoring Case Study
  • Docker Monitoring with New Relic
  • Enabling Docker Monitoring with New Relic
  • Docker in the New Relic Community Forum
June 24, 2015
by Fredric Paul
· 900 Views
R: dplyr -- Segfault Cause 'memory not mapped'
In my continued playing around with web logs in R, I wanted to process the logs for a day and see what the most popular URIs were. I first read in all the lines using the read_lines function in readr and put the vector it produced into a data frame so I could process it using dplyr.

    library(readr)
    dlines = data.frame(column = read_lines("~/projects/logs/2015-06-18-22-docs"))

In the previous post I showed some code to extract the URI from a log line. I extracted this code out into a function and adapted it so that I could pass in a list of values instead of a single value:

    extract_uri = function(log) {
      parts = str_extract_all(log, "\"[^\"]*\"")
      return(lapply(parts, function(p) str_match(p[1], "GET (.*) HTTP")[2] %>% as.character))
    }

Next I ran the following function to count the number of times each URI appeared in the logs:

    library(dplyr)
    pages_viewed = dlines %>%
      mutate(uri = extract_uri(column)) %>%
      count(uri) %>%
      arrange(desc(n))

This crashed my R process with the following error message:

    segfault cause 'memory not mapped'

I narrowed it down to a problem when doing a group by operation on the 'uri' field and came across this post, which suggested that it was handled more cleanly in more recent versions of dplyr. I upgraded to 0.4.2 and tried again:

    ## Error in eval(expr, envir, enclos): cannot group column uri, of class 'list'

That makes more sense. We're probably returning a list from extract_uri rather than a vector which would fit nicely back into the data frame. That's fixed easily enough by unlisting the result:

    extract_uri = function(log) {
      parts = str_extract_all(log, "\"[^\"]*\"")
      return(unlist(lapply(parts, function(p) str_match(p[1], "GET (.*) HTTP")[2] %>% as.character)))
    }

And now when we run the count function it's happy again, good times!
June 24, 2015
by Mark Needham
· 1,413 Views
R: Regex -- Capturing Multiple Matches of the Same Group
I’ve been playing around with some web logs using R and I wanted to extract everything that existed in double quotes within a logged entry. This is an example of a log entry that I want to parse: log = '2015-06-18-22:277:548311224723746831\t2015-06-18T22:00:11\t2015-06-18T22:00:05Z\t93317114\tip-127-0-0-1\t127.0.0.5\tUser\tNotice\tneo4j.com.access.log\t127.0.0.3 - - [18/Jun/2015:22:00:11 +0000] "GET /docs/stable/query-updating.html HTTP/1.1" 304 0 "http://neo4j.com/docs/stable/cypher-introduction.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"' And I want to extract these 3 things: /docs/stable/query-updating.html http://neo4j.com/docs/stable/cypher-introduction.html Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36 i.e. the URI, the referrer and browser details. I’ll be using the stringr library which seems to work quite well for this type of work. To extract these values we need to find all the occurrences of double quotes and get the text inside those quotes. We might start by using the str_match function: > library(stringr) > str_match(log, "\"[^\"]*\"") [,1] [1,] "\"GET /docs/stable/query-updating.html HTTP/1.1\"" Unfortunately that only picked up the first occurrence of the pattern so we’ve got the URI but not the referrer or browser details. I tried str_extract with similar results before I found str_extract_all which does the job: > str_extract_all(log, "\"[^\"]*\"") [[1]] [1] "\"GET /docs/stable/query-updating.html HTTP/1.1\"" [2] "\"http://neo4j.com/docs/stable/cypher-introduction.html\"" [3] "\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36\"" We still need to do a bit of cleanup to get rid of the ‘GET’ and ‘HTTP/1.1′ in the URI and the quotes in all of them: parts = str_extract_all(log, "\"[^\"]*\"")[[1]] uri = str_match(parts[1], "GET (.*) HTTP")[2] referer = str_match(parts[2], "\"(.*)\"")[2] browser = str_match(parts[3], "\"(.*)\"")[2] > uri [1] "/docs/stable/query-updating.html" > referer [1] "https://www.google.com/" > browser [1] "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36" We could then go on to split out the browser string into its sub components but that’ll do for now!
June 24, 2015
by Mark Needham
· 917 Views
Git for Windows, Getting Invalid Username or Password with Wincred
If you use HTTPS to communicate with your Git repository, e.g. GitHub or Visual Studio Online, you usually set up a credential manager to avoid entering credentials for each command that contacts the server. With the latest versions of Git you can configure wincred with this simple command:

    git config --global credential.helper wincred

This morning I started getting an error while trying to push some commits to GitHub:

    $ git push
    remote: invalid username or password.
    fatal: authentication failed for 'https://github.com/proximosrl/jarvis.documentstore.git/'

If I remove the credential helper (git config --global credential.helper unset) everything works: Git asks me for user name and password and I'm able to do everything. But as soon as I re-enable the credential helper, the error returns. This problem is probably caused by some corruption of the stored credentials, and usually you can simply clear the stored credentials; at the next operation you will be prompted for credentials and everything starts working again. The question is, where are credentials stored for wincred? If you use wincred for credential.helper, Git stores your credentials in the standard Windows Credential Manager. You can simply open Credential Manager on your computer.

Figure 1: Credential Manager in your Control Panel settings

Opening Credential Manager you can manage Windows and web credentials. Now simply have a look at both web credentials and Windows credentials, and delete everything related to GitHub or the server you are using. The next time you issue a Git command that requires authentication, you will be prompted for credentials again and the credentials will be stored again in the store.

Gian Maria.
June 23, 2015
by Ricci Gian Maria
· 20,500 Views
You're Invited: MuleSoft Forum in Gurgaon, Mumbai & Bangalore!!!
Don't miss the opportunity to engage in an interactive discussion with your peers around digital transformation at MuleSoft Forum, and learn why API-led connectivity has become IT's secret weapon. Register a seat for yourself using the following links:

  • Gurgaon – 9th July
  • Mumbai – 14th July
  • Bangalore – 16th July

What's in store at MuleSoft Forum: Whether you're a CIO, an Enterprise Architect, or a Developer, MuleSoft Forum will provide you with a number of innovative ways to improve and transform your business. At MuleSoft Forum, you'll have the opportunity to:

  • Learn how Mule ESB Enterprise Edition offers reliability, performance, scalability, and security – all out of the box.
  • See MuleSoft's connectivity platform in action in a short live demo.
  • Discover how API-led connectivity enables major business initiatives.
  • Attend the exclusive MuleSoft ecosystem networking reception.
June 23, 2015
by Selvin Raj
· 1,247 Views · 1 Like
PostgreSQL Powers All New Apps for 77% of the Database's Users
Survey of open source PostgreSQL users found adoption continues to rise with 55% of users deploying it for mission-critical applications Bedford, MA – June 23, 2015 – EnterpriseDB (EDB), the leading provider of enterprise-class Postgres products and database compatibility solutions, today announced the results of its “PostgreSQL Adoption Survey 2015,” a biennial survey of open source PostgreSQL users. Conducted by EnterpriseDB, the survey found PostgreSQL adoption continuing to rise, with 55% of users – up from 40% two years ago – deploying it for mission-critical applications and 77% of users are dedicating all new application deployments to PostgreSQL. These findings give voice to end users and confirm such industry indicators as increasing job listings and monthly rankings on DB-Engines that have pointed to rising interest in and demand for PostgreSQL, also called Postgres. The growing popularity of Postgres also comes as traditional software vendors suffer setbacks in the marketplace. The enterprise-class performance, security and stability of Postgres, on par with traditional database vendors for most corporate workloads, meanwhile have helped position Postgres among the solutions from the world’s largest vendors. The opportunity to transform their data center economics has helped fuel downloads of Postgres as well. End users reported cutting costs with Postgres, with 41% reporting they had first-year cost savings of 50% or more. They’re using Postgres to build web 2.0 applications using unstructured data as evidenced by the 64% of respondents who said they were working with JSON/JSONB and the 47% who said they were using Postgres for collaboration applications. “Postgres is empowering organizations to transform the economics of IT. IT can invest in the customer engagement applications that differentiate their operations from their competition instead of continuing to pay the steep and rising licensing and support fees charged by traditional database vendors,” said Marc Linster, senior vice president of products and services of EnterpriseDB. “With the expanding adoption, EnterpriseDB has experienced dramatic growth year over year, providing the software, services and support that organizations need to be successful with Postgres.” Database Migrations, Replacements The findings also support statements in a recent Gartner report that reflect the widespread acceptance of open source databases. “By 2018, more than 70% of new in-house applications will be developed on an OSDBMS, and 50% of existing commercial RDBMS instances will have been converted or will be in process,” according to the April 2015 Gartner report, The State of Open-Source RDBMs, 2015.* Among Postgres users, the survey findings show migrations are already under way with 37% reporting they had migrated applications from Oracle or Microsoft SQL Server to Postgres. Many users were still planning further migrations, with 37% of PostgreSQL users saying they will gradually replace their legacy systems with Postgres, compared to 29% who said that in the 2013 survey. Further, end users predict their deployments of Postgres will expand significantly, with 32% saying they anticipate production deployments of Postgres to increase by at least 50% over the next year. The survey, conducted by EnterpriseDB using an online tool in May 2015, queried registered users of PostgreSQL and drew 274 respondents worldwide from government organizations and companies ranging in size and industry. 
*The State of Open-Source RDBMs, 2015, by Donald Feinberg and Merv Adrian, published on April 21, 2015. Connect with EnterpriseDB Read the blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb Become a fan on Facebook: http://www.facebook.com/EnterpriseDB?ref=ts Join us on Google+: https://plus.google.com/108046988421677398468 Connect on LinkedIn: http://www.linkedin.com/company/enterprisedb
June 23, 2015
by Fran Cator
· 813 Views
It's Time to Start Programming (for) Adults
This week we're in Boston at DevNation, an awesome, young (second ever), and relatively intimate (~500 attendees) conference on anything and everything hard-core, cool-and-hot (DevOps, big data, Angular, IoT, you name it), and of course -- since the conference is organized by Red Hat -- totally open-source. So far I've had in-depth conversations with five super-amazing engineers, attended several inspiring keynotes, and chatted with one skilled developer after another. We'll transcribe the deeper interviews shortly, including some on topics totally unrelated to this post. But meanwhile I'd like to offer some thoughts inspired by the first day of the event. The general theme is: we're just beginning to get serious about separation of concerns. The metaphor that keeps popping into my head comes from the first keynote: machines have finally grown up. Imperatives: telling really unintelligent agents what to do (and then they sort of do whatever they please) It is trivial to observe that computers are incredibly stupid. Turing's fundamental paper is about how to figure out whether a theoretical computer will keep calculating the values of a function until the heat death of the universe (okay that's a slight oversimplification). The fact that Edsger Dijkstra felt the need to rail gently against all goto statements in any higher-level language than machine code suggests that, in 1968, far too many computers needed instructions about how to read the instructions that tell them what to do in the first place. Richard Feynman's famous lecture on computer heuristics is the condescension of the man who conceived quantum computing to the level of functional composition (hmmm) and file systems (double sigh). Stupid agents need to be told exactly what to do. Then they need to be told to pay attention to the exact part of the command that tells them exactly what they have been told to do (dude, just goto line 1343 already and shut up). Then they don't do what you told them (optimistically we call this an 'exception'), and then you send them into time out / set a break point and try to figure out where the idiot state muted off the rails. They stare blankly at the wall / variable / register and either do nothing or repeat another unintelligibly wrong result until you notice that your increment is (apparently meaninglessly to you) one bracket too deep. You sigh and tell them what to do again, and after a while they hit age thirty (life-years/debug-hours) and maybe do something useful with their (process-)lives. Well, maybe I'm straining the metaphor a little here, but you get the point because it cuts too close to home. We spend far too much time fixing stupid mistakes that we didn't even know we were making because -- like all actual human beings -- we assumed that the agent we commanded will use their common sense to iron out those few whiffs of, admit it, frank nonsense that our step-by-step instructions will probably always contain. So, at least, goes the imperative programming paradigm. The machine does what you tell it to; and the universe collapses onto itself before the last real number is computed. Functions: reliable, predictable adults Time to give credit where it's due: I'm really just riffing on the metaphor Venkat Subramanian offered in his highly enjoyable keynote on The Joy of Functional Programming yesterday morning His not-so-smart agents -- the 'programmed' of imperative programming -- were toddlers. 
Since I don't have any kids, I can't presume to understand this experience fully (although I did grow up with three younger brothers..). But the general idea is: imperative programming is tricky because, when you spell everything out super literally, it's very hard to tell exactly why what you thought should happen didn't. Venkat's talk was a whirlwind of functional concepts, from the thrill of immutability to the self-evident utility of memoization. For random (Myers-Briggs?) reasons, the object-oriented paradigm never seemed very intuitive to me -- I've gravitated towards functional style even when the problem domain wasn't actually modeled very well by functions -- but Venkat's side-by-side implementations of simple calculations in OO and functional Java showed the readability delta very clearly. Functional code is beautiful because it looks like its purpose. It tells you flat-out: here is what I do; and then it does it. But immutable functions are also beautiful because they do exactly the same thing every time. I couldn't count on my two year old brother very much at all because given a certain input I had pretty much no idea what would come out. But we all count on our grown-up collaborators to output exactly what they should, given a definite input, predictably and reliably every time. Of course, people also do more than expected -- every intervention of intelligence is an injection of creativity, not generated by the definition of the function -- but at least they do what you need them to do and no less. Containers: grown-ups with good boundaries I'm picking out just one aspect of the resurgent 'joy' of functional programming because the renaissance of containerization (another 'old' technology that is just now really taking off) is, I think, a part of the same shift toward, let's say, treating computers as adults. If functions are reliable agents, then applications in well-defined containers are self-sufficient agents who know exactly what they need from others and neither require nor demand anything more. If apps on dedicated VMs are teenagers negotiating personal boundaries by waking/booting up independently (and taking far too long -- and far too many resources -- to do so, given their meager output) -- or bubble boys, isolated in ways that are unfortunate in order to isolate in ways that are absolutely necessary -- then containerized applications are subway-riders who jam into the train without offending anyone or campers who can live anywhere with just a backpack of just the stuff they need. Of course, subway-riders and campers do more than just not-mess-up. But what's kind of neat about containers is that -- like an adult with good boundaries -- clearly defined bounds and interfaces free up the application / mind to do whatever world-changing thing the developer / human has cooked up. I'll come back to this metaphor in a later article. (Mesh networks, SDN, and ad-hoc computing are all part of the same picture, I think. Kubernetes probably is too, along with event-driven and reactive programming, the actor model, dreams of Smalltalk, and of course REST, at least of the HATEOAS flavor.) But maybe this isn't a good way to think about some of these recent sparks in devworld within a single paradigm -- and maybe my perpetual discomfort with OO is influencing me too much. What do you think?
June 23, 2015
by John Esposito
· 1,702 Views · 1 Like
7 principles for good intranet governance
An effective governance framework is essential for a well-managed intranet. It can be the deciding factor between a good user experience, greatly valued, and a poor user experience with little benefit. Every intranet is different depending on the size, type, and culture of the organisation it supports. However, there are some key governance principles that are common to their success. Recently I spoke at Intranatverk about this, based on my book 'Digital success or digital disaster?', which is a practical, experience-based approach to growing and managing a successful intranet. My slides '7 principles of good intranet governance' are available for you to share.

The alternative to governance can be chaotic anarchy, posing risks to security and intellectual property and providing an awful experience for those who still use your intranet. Where governance can start to get confusing and difficult is in how it is applied. Applying these governance principles leads to a good outcome:

1. Know your organisation
2. Define the scope
3. Put people first
4. Use all resources
5. Compare and benchmark
6. Do what you say you will do
7. Keep it legal

Think about how you build a house: the foundations, walls, floors, windows, doors and finally the roof. It would not make sense for you to have windows, doors, and a roof only. The same applies to your governance framework. These principles for good governance are not like a menu from which you choose some items and leave others alone. You need to follow all of them to build a strong foundation to improve your intranet and implement your strategy. Read the introductory chapter of my new governance book to find out more. A license to share the ebook within your whole organisation is also available.
June 23, 2015
by Mark Morrell
· 933 Views
Lucene SIMD Codec Benchmark and Future Steps
We are happy to share the results of our Lucene SIMD research announced earlier. Ivan integrated https://github.com/lemire/simdcomp as a Lucene codec, and we observed an 18% gain on the standard Lucene benchmark. Here are the fork, deck, and recording from BerlinBuzzwords. Tech notes: The prototype is limited to postings (IndexOptions.DOCS); it does not yet support freqs, positions, or payloads, so full tf-idf scoring is not possible yet. The heap problem: Currently, the bottleneck of search performance is the scoring heap. The heap is hard to vectorize, and even hard to compute efficiently with regular instructions, so the benchmark retrieves only the top 10 docs to limit the effort spent managing the heap. Here is a profiler snapshot for the default Lucene code: decoding takes more time than collecting. These are the hotspots with the SIMD codec: note that collecting now dominates, and ForUtil takes relatively less time for decoding. Edge cases: There are a few special code paths that bypass generic FOR decoding and make it harder to observe the vectorization gain. Very dense stopword postings are encoded as a sequence of increasing numbers by just specifying the length of the sequence (see ForUtil.ALL_VALUES_EQUAL), so we excluded stopwords from the benchmark to better observe the gain in FOR decoding. Another edge case is short postings under heavy segmentation: FOR compression is applied to blocks, and the remaining tail is encoded with vInt, so to observe the gain in FOR decoding we merge the segments into a single one. For the same reason, rare terms with short postings lists are not a good use case for showing a gain. Further plans: Here are some directions we are considering: provide the codec and benchmark as separate modules; apply the SIMD codec to DocValues and Norms, which should improve generic sorting, scoring, and faceting (because ordinals in DocValues are not increasing like postings, https://github.com/lemire/FastPFor should be incorporated); complete the codec to support frequencies, offsets, and positions so that it is fully functional; presumably, the SIMD facet component might get some gain from vectorization, although decoding ordinals might not be the biggest problem in faceting, as described here; execute binary operations like intersections on compressed data with SIMD instructions (https://github.com/lemire/SIMDCompressionAndIntersection); native code might access mmapped index files without boundary checks or copying to heap arrays; implementing roaring bitmaps might help with dense postings. Which of those directions is relevant to your challenges? Leave a comment below! There are still questions to clarify: will critical natives work on Java 9 and later? Could it happen that the JIT's auto-vectorization heuristics make an explicit SIMD codec redundant? We'd like to thank everyone who contributed their research and enabled us to conduct ours.
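For readers unfamiliar with FOR compression, here is a minimal, hypothetical Java sketch of the transform the codec vectorizes (this is not the project's ForUtil or lemire/simdcomp, and all names here are mine): doc IDs in a postings block are delta-encoded and packed with a fixed bit width, and decoding is essentially bit-unpacking plus a prefix sum, which is exactly the kind of tight loop SIMD instructions accelerate.

import java.util.Arrays;

public class ForSketch {
    // Delta-encode sorted doc IDs; the gaps are what get bit-packed per block.
    static int[] deltas(int[] sortedDocIds) {
        int[] d = new int[sortedDocIds.length];
        int prev = 0;
        for (int i = 0; i < sortedDocIds.length; i++) {
            d[i] = sortedDocIds[i] - prev; // gaps stay small for dense postings
            prev = sortedDocIds[i];
        }
        return d;
    }

    // The fixed bit width the whole block would be packed with.
    static int bitsRequired(int[] deltas) {
        int max = 0;
        for (int v : deltas) max = Math.max(max, v);
        return 32 - Integer.numberOfLeadingZeros(Math.max(max, 1));
    }

    // Decoding is the inverse: a running prefix sum over the unpacked deltas.
    // This loop is what SIMD decoding speeds up several values at a time.
    static int[] decode(int[] deltas) {
        int[] docIds = new int[deltas.length];
        int acc = 0;
        for (int i = 0; i < deltas.length; i++) {
            acc += deltas[i];
            docIds[i] = acc;
        }
        return docIds;
    }

    public static void main(String[] args) {
        int[] docIds = {3, 7, 8, 15, 16, 40};
        int[] d = deltas(docIds);
        System.out.println(Arrays.toString(d) + " fits in " + bitsRequired(d) + " bits per value");
        System.out.println(Arrays.toString(decode(d)));
    }
}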
June 23, 2015
by Mikhail Khludnev
· 1,780 Views
Neo4j: The Foul Revenge Graph
Last week I was showing the foul graph to my colleague Alistair, who came up with the idea of running a ‘foul revenge’ query to find out which players gained revenge for a foul with one of their own later in the match. Queries like this are very path-centric and therefore work well in a graph. To recap, this is what the foul graph looks like: The first thing that we need to do is connect the fouls in a linked list based on time so that we can query their order more easily. We can do this with the following query: MATCH (foul:Foul)-[:COMMITTED_IN_MATCH]->(match) WITH foul,match ORDER BY match.id, foul.sortableTime WITH match, COLLECT(foul) AS fouls FOREACH(i in range(0, length(fouls) -2) | FOREACH(foul1 in [fouls[i]] | FOREACH (foul2 in [fouls[i+1]] | MERGE (foul1)-[:NEXT]->(foul2) ))); This query collects fouls grouped by match and then adds a ‘NEXT’ relationship between adjacent fouls. The graph now looks like this: Now let’s find the revenge foulers in the Bayern Munich vs Barcelona match. We’re looking for the following pattern: This translates to the following Cypher query: match (foul1:Foul)-[:COMMITTED_AGAINST]->(app1)-[:COMMITTED_FOUL]->(foul2)-[:COMMITTED_AGAINST]->(app2)-[:COMMITTED_FOUL]->(foul1), (player1)-[:MADE_APPEARANCE]->(app1), (player2)-[:MADE_APPEARANCE]->(app2), (foul1)-[:COMMITTED_IN_MATCH]->(match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul2) WHERE (foul1)-[:NEXT*]->(foul2) RETURN player2.name AS firstFouler, player1.name AS revengeFouler, foul1.time, foul1.location, foul2.time, foul2.location I’ve added a few extra parts to the pattern to pull out the players involved and to find the revenge foulers in a specific match: the Bayern Munich vs Barcelona Semi Final 2nd leg. We end up with the following revenge fouls: We can see here that Dani Alves actually gains revenge on Bastian Schweinsteiger twice for a foul he made in the 10th minute. If we tweak the query to the following, we can get a visual representation of the revenge fouls as well: match (foul1:Foul)-[:COMMITTED_AGAINST]->(app1)-[:COMMITTED_FOUL]->(foul2)-[:COMMITTED_AGAINST]->(app2)-[:COMMITTED_FOUL]->(foul1), (player1)-[:MADE_APPEARANCE]->(app1), (player2)-[:MADE_APPEARANCE]->(app2), (foul1)-[:COMMITTED_IN_MATCH]->(match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul2), (foul1)-[:NEXT*]->(foul2) RETURN * At the moment I’ve restricted the revenge concept to single matches, but I wonder whether it’d be more interesting to create a linked list of fouls which crosses matches between teams in the same season. The code for all of this is on GitHub; the README is a bit sketchy at the moment, but I’ll be fixing that up soon.
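As a side note, if you want to run the revenge query from application code rather than the Neo4j browser, a sketch along these lines should work with the official Neo4j Java driver (my own illustration, not part of the original post; the Bolt driver post-dates it, a 4.x driver API is assumed, and the URI and credentials below are placeholders).

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Record;
import org.neo4j.driver.Result;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

public class RevengeFouls {
    public static void main(String[] args) {
        // Same pattern as the Cypher above, parameterised on the match id.
        String query =
            "MATCH (foul1:Foul)-[:COMMITTED_AGAINST]->(app1)-[:COMMITTED_FOUL]->(foul2)" +
            "-[:COMMITTED_AGAINST]->(app2)-[:COMMITTED_FOUL]->(foul1), " +
            "(player1)-[:MADE_APPEARANCE]->(app1), (player2)-[:MADE_APPEARANCE]->(app2), " +
            "(foul1)-[:COMMITTED_IN_MATCH]->(m:Match {id: $matchId})<-[:COMMITTED_IN_MATCH]-(foul2) " +
            "WHERE (foul1)-[:NEXT*]->(foul2) " +
            "RETURN player2.name AS firstFouler, player1.name AS revengeFouler";

        // Placeholder connection details; adjust for your own instance.
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                                                  AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {
            Result result = session.run(query, Values.parameters("matchId", "32683310"));
            while (result.hasNext()) {
                Record row = result.next();
                System.out.println(row.get("firstFouler").asString()
                        + " was fouled back by " + row.get("revengeFouler").asString());
            }
        }
    }
}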
June 23, 2015
by Mark Needham
· 910 Views
Spring Data Couchbase: Handle Unknown Class
Spring Data Couchbase provides a transparent way to save and load Java classes to and from Couchbase. However, if a loaded class contains a property of an unknown class, you will receive org.springframework.data.mapping.model.MappingException: No mapping metadata found for java.lang.Object. This may happen if, for example, different versions of your code save and load the information. To handle the situation where we want to load an object that contains another object of an unknown class (in a map or list property), we should override the default MappingCouchbaseConverter. Let's see how we do this with Spring XML configuration: I replace my old XML with the following XML and create the following class: public class MyMappingCouchbaseConverter extends MappingCouchbaseConverter { public MyMappingCouchbaseConverter(final MappingContext<? extends CouchbasePersistentEntity<?>, CouchbasePersistentProperty> mappingContext) { super(mappingContext); } @Override protected <R> R read(final TypeInformation<R> type, final CouchbaseDocument source, final Object parent) { if (Object.class == typeMapper.readType(source, type).getType()) { return null; } return super.read(type, source, parent); } } Now, if a loaded object contains a property of an unknown class, or an object of an unknown class in a list or map, that property or object will be replaced by null.
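To make the failure mode concrete, here is a hypothetical document class (my own illustration; the class and property names are not from the original post) showing the kind of loosely typed property that triggers the exception when a newer version of the application has stored values of classes this deployment does not know about.

import java.util.HashMap;
import java.util.Map;
import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Document;

// Hypothetical document class, not taken from the original post.
@Document
public class UserProfile {
    @Id
    private String id;

    private String name;

    // Loosely typed extras: a newer release may have written values here whose
    // classes are unknown to this (older) deployment.
    private Map<String, Object> attributes = new HashMap<>();

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Map<String, Object> getAttributes() { return attributes; }
    public void setAttributes(Map<String, Object> attributes) { this.attributes = attributes; }
}
// With the default converter, loading such a document fails with
// MappingException: No mapping metadata found for java.lang.Object.
// With MyMappingCouchbaseConverter registered instead, the unrecognised values come back as null.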
June 22, 2015
by Pavel Bernshtam
· 3,861 Views