Mike Hadlow

DZone MVB at Suteki Ltd

Brighton, GB

Joined Nov 2008

About

Mike Hadlow is a Brighton, UK-based developer, blogger, and author of a number of open source frameworks and applications.

Stats

Reputation: 151
Pageviews: 700.9K
Articles: 19
Comments: 0
Articles

The Possibilities of Web MIDI With TypeScript
With this new API you can quickly create a browser-based MIDI player for yourself using TypeScript. Read on to get started making music with code!
September 6, 2018
· 9,514 Views · 2 Likes
A Docker 'Hello World' With Mono
Docker is a lightweight virtualization technology for Linux that promises to revolutionize the deployment and management of distributed applications. Rather than requiring a complete operating system, like a traditional virtual machine, Docker is built on top of Linux containers, a feature of the Linux kernel, which allows lightweight Docker containers to share a common kernel while isolating applications and their dependencies.

There's a very good Docker SlideShare presentation that explains the philosophy behind Docker using the analogy of standardized shipping containers. It's interesting that the standard shipping container has done more to create our global economy than all the free-trade treaties and international agreements put together.

A Docker image is built from a script called a 'Dockerfile'. Each Dockerfile starts by declaring a parent image. This is very cool, because it means that you can build up your infrastructure from a layer of images, starting with general platform images and then layering successively more application-specific images on top. I'm going to demonstrate this by first building an image that provides a Mono development environment, and then creating a simple 'Hello World' console application image that runs on top of it.

Because Dockerfiles are simple text files, you can keep them under source control and version your environment and dependencies alongside the actual source code of your software. This is a game changer for the deployment and management of distributed systems. Imagine developing an upgrade to your software that includes new versions of its dependencies, including pieces that we've traditionally considered the realm of the environment, and not something that you would normally put in your source repository; the Mono version that the software runs on, for example.
You can script all these changes in your Dockerfile, test the new container on your local machine, then simply move the image to test and then production. The possibilities for vastly simplified deployment workflows are obvious.

Docker takes concerns that were previously the responsibility of an organization's operations department and makes them a first-class part of the software development lifecycle. Now your infrastructure can be maintained as source code, built as part of your CI cycle and continuously deployed, just like the software that runs inside it.

Docker also provides the Docker Index, an online repository of Docker images. Anyone can create an image and add it to the index, and there are already images for almost any piece of infrastructure you can imagine. Say you want to use RabbitMQ; all you have to do is grab a handy RabbitMQ image, such as https://index.docker.io/u/tutum/rabbitmq/, and run it like this:

```shell
docker run -d -p 5672:5672 -p 55672:55672 tutum/rabbitmq
```

The -p flag maps ports between the image and the host.

Let's look at an example. I'm going to show you how to create a Docker image for the Mono development environment and have it built and hosted on the Docker Index. Then I'm going to build a local Docker image for a simple 'hello world' console application that I can run on my Ubuntu box.

First we need to create a Dockerfile for our Mono environment. I'm going to use the Mono Debian packages from directhex. These are maintained by the official Debian/Ubuntu Mono team and are the recommended way of installing the latest Mono versions on Ubuntu.
Here's the Dockerfile:

```dockerfile
#DOCKER-VERSION 0.9.1
#
#VERSION 0.1
#
# monoxide mono-devel package on Ubuntu 13.10
FROM ubuntu:13.10
MAINTAINER Mike Hadlow
RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q software-properties-common
RUN sudo add-apt-repository ppa:directhex/monoxide -y
RUN sudo apt-get update
RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q mono-devel
```

Notice the first line (after the comments) that reads 'FROM ubuntu:13.10'. This specifies the parent image for this Dockerfile: the official Docker Ubuntu image from the index. When I build this Dockerfile, that image will be automatically downloaded and used as the starting point for my image.

But I don't want to build this image locally. Docker provides a build server linked to the Docker Index. All you have to do is create a public GitHub repository containing your Dockerfile, then link the repository to your profile on the Docker Index. You can read the documentation for the details. The GitHub repository for my Mono image is at https://github.com/mikehadlow/ubuntu-monoxide-mono-devel. Notice how the Dockerfile is in the root of the repository. That's the default location, but you can have multiple files in sub-directories if you want to support many images from a single repository.

Now any time I push a change of my Dockerfile to GitHub, the Docker build system will automatically build the image and update the Docker Index.
You can see the image listed here: https://index.docker.io/u/mikehadlow/ubuntu-monoxide-mono-devel/

I can now grab my image and run it interactively like this:

```shell
$ sudo docker pull mikehadlow/ubuntu-monoxide-mono-devel
Pulling repository mikehadlow/ubuntu-monoxide-mono-devel
f259e029fcdd: Download complete
511136ea3c5a: Download complete
1c7f181e78b9: Download complete
9f676bd305a4: Download complete
ce647670fde1: Download complete
d6c54574173f: Download complete
6bcad8583de3: Download complete
e82d34a742ff: Download complete

$ sudo docker run -i mikehadlow/ubuntu-monoxide-mono-devel /bin/bash
mono --version
Mono JIT compiler version 3.2.8 (Debian 3.2.8+dfsg-1~pre1)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
        TLS:           __thread
        SIGSEGV:       altstack
        Notifications: epoll
        Architecture:  amd64
        Disabled:      none
        Misc:          softdebug
        LLVM:          supported, not enabled.
        GC:            sgen
exit
```

Next let's create a new local Dockerfile that compiles a simple 'hello world' program, and then runs it when we run the image. You can follow along with these steps; all you need is an Ubuntu machine with Docker installed.

First, here's our 'hello world'. Save this code in a file named hello.cs:

```csharp
using System;

namespace Mike.MonoTest
{
    public class Program
    {
        public static void Main()
        {
            Console.WriteLine("Hello World");
        }
    }
}
```

Next we'll create our Dockerfile. Copy this code into a file called 'Dockerfile':

```dockerfile
#DOCKER-VERSION 0.9.1
FROM mikehadlow/ubuntu-monoxide-mono-devel
ADD . /src
RUN mcs /src/hello.cs
CMD ["mono", "/src/hello.exe"]
```

Once again, notice the 'FROM' line. This time we're telling Docker to start with our Mono image. The next line, 'ADD . /src', tells Docker to copy the contents of the current directory (the one containing our Dockerfile) into a root directory named 'src' in the container. Now our hello.cs file is at /src/hello.cs in the container, so we can compile it with the Mono C# compiler, mcs, which is the line 'RUN mcs /src/hello.cs'.
Now we will have the executable, hello.exe, in the src directory. The line 'CMD ["mono", "/src/hello.exe"]' tells Docker what we want to happen when the container is run: just execute our hello.exe program.

As an aside, this exercise highlights some questions around what best practice should be with Docker. We could have done this in several different ways. Should we build our software independently of the Docker build in some CI environment, or does it make sense to do it this way, with the Docker build as a step in our CI process? Do we want to rebuild our container for every commit to our software, or do we want the running container to pull the latest from our build output? Initially I'm quite attracted to the idea of building the image as part of the CI, but I expect that we'll have to wait a while for best practice to evolve.

Anyway, for now let's manually build our image:

```shell
$ sudo docker build -t hello .
Uploading context 1.684 MB
Uploading context
Step 0 : FROM mikehadlow/ubuntu-monoxide-mono-devel
 ---> f259e029fcdd
Step 1 : ADD . /src
 ---> 6075dee41003
Step 2 : RUN mcs /src/hello.cs
 ---> Running in 60a3582ab6a3
 ---> 0e102c1e4f26
Step 3 : CMD ["mono", "/src/hello.exe"]
 ---> Running in 3f75e540219a
 ---> 1150949428b2
Successfully built 1150949428b2
Removing intermediate container 88d2d28f12ab
Removing intermediate container 60a3582ab6a3
Removing intermediate container 3f75e540219a
```

You can see Docker executing each build step in turn and storing the intermediate result until the final image is created. Because we used the tag (-t) option and named our image 'hello', we can see it when we list all the Docker images:

```shell
$ sudo docker images
REPOSITORY                              TAG      IMAGE ID       CREATED          VIRTUAL SIZE
hello                                   latest   1150949428b2   10 seconds ago   396.4 MB
mikehadlow/ubuntu-monoxide-mono-devel   latest   f259e029fcdd   24 hours ago     394.7 MB
ubuntu                                  13.10    9f676bd305a4   8 weeks ago      178 MB
ubuntu                                  saucy    9f676bd305a4   8 weeks ago      178 MB
...
```

Now let's run our image.
Docker will create a container from our image and run it. Note that each 'docker run' creates a fresh container from the image, rather than reusing an existing one:

```shell
$ sudo docker run hello
Hello World
```

And that's it. Imagine that instead of our little hello.exe, this image contained our web application, or maybe a service in some distributed software. In order to deploy it, we'd simply ask Docker to run it on any server we like: development, test, production, or on many servers in a web farm. This is an incredibly powerful way of doing consistent, repeatable deployments.

To reiterate, I think Docker is a game changer for large server-side software. It's one of the most exciting developments to have emerged this year and definitely worth your time to check out.
April 3, 2014
· 10,720 Views
Docker: Bulk Remove Images and Containers
I've just started looking at Docker. It's a cool new technology that has the potential to make the management and deployment of distributed applications a great deal easier. I'd very much recommend checking it out. I'm especially interested in using it to deploy Mono applications, because it promises to remove the hassle of deploying and maintaining the Mono runtime on a multitude of Linux servers.

I've been playing around creating new images and containers and debugging my Dockerfile, and I've wound up with lots of temporary containers and images. It's really tedious repeatedly running 'docker rm' and 'docker rmi', so I've knocked up a couple of bash commands to bulk delete images and containers.

Delete all containers:

```shell
sudo docker ps -a -q | xargs -n 1 -I {} sudo docker rm {}
```

Delete all un-tagged (or intermediate) images, which show up as '<none>' in the listing:

```shell
sudo docker rmi $(sudo docker images | grep '<none>' | tr -s ' ' | cut -d ' ' -f 3)
```
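As a quick sanity check of that second pipeline, here is its column-extraction step run against canned 'docker images' output, so no Docker daemon is needed. The repository rows and image IDs below are invented for illustration:

```shell
# Canned stand-in for the output of `sudo docker images` (IDs are made up).
sample_output='REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 0e102c1e4f26 10 seconds ago 396.4 MB
ubuntu 13.10 9f676bd305a4 8 weeks ago 178 MB'

# Keep only un-tagged images, squeeze repeated spaces, then take the third
# space-separated field (the image id) -- this is what gets fed to `docker rmi`.
echo "$sample_output" | grep '<none>' | tr -s ' ' | cut -d ' ' -f 3
```

This prints only `0e102c1e4f26`, the id of the single un-tagged image in the sample.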
April 2, 2014
· 14,056 Views
How To Add Images To A GitHub Wiki
Every GitHub repository comes with its own wiki. This is a great place to put the documentation for your project. What isn't clear from the wiki documentation is how to add images to your wiki. Here's my step-by-step guide.

I'm going to add a logo to the main page of my WikiDemo repository's wiki: https://github.com/mikehadlow/WikiDemo/wiki/Main-Page

First clone the wiki. You grab the clone URL from the button at the top of the wiki page.

```shell
$ git clone git@github.com:mikehadlow/WikiDemo.wiki.git
Cloning into 'WikiDemo.wiki'...
Enter passphrase for key '/home/mike.hadlow/.ssh/id_rsa':
remote: Counting objects: 6, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 6 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (6/6), done.
```

Create a new directory called 'images' (it doesn't matter what you call it; this is just a convention I use):

```shell
$ mkdir images
```

Then copy your picture(s) into the images directory (I've copied my logo_design.png file to my images directory):

```shell
$ ls -l
-rwxr-xr-x 1 mike.hadlow Domain Users 12971 Sep  5  2013 logo_design.png
```

Commit your changes and push back to GitHub:

```shell
$ git add -A
$ git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD ..." to unstage)
#
#       new file:   images/logo_design.png
#
$ git commit -m "Added logo_design.png"
[master 23a1b4a] Added logo_design.png
 1 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100755 images/logo_design.png
$ git push
Enter passphrase for key '/home/mike.hadlow/.ssh/id_rsa':
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 9.05 KiB, done.
Total 4 (delta 0), reused 0 (delta 0)
To git@github.com:mikehadlow/WikiDemo.wiki.git
   333a516..23a1b4a  master -> master
```

Now we can put a link to our image in 'Main Page'. Save, and there's your image for all to see.
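The screenshots showing the final step didn't survive the page conversion. For reference, assuming the images/logo_design.png path used above, a Markdown wiki page can reference the committed file with standard image syntax (Gollum, GitHub's wiki engine, also accepts the shorthand [[images/logo_design.png]]):

```markdown
![logo](images/logo_design.png)
```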
March 27, 2014
· 24,442 Views · 1 Like
EasyNetQ: Multiple Handlers per Consumer
A common feature request for EasyNetQ has been to have some way of implementing a command pipeline pattern. Say you've got a component that is emitting commands. In a .NET application each command would most probably be implemented as a separate class. A command might look something like this:

```csharp
public class AddUser
{
    public string Username { get; private set; }
    public string Email { get; private set; }

    public AddUser(string username, string email)
    {
        Username = username;
        Email = email;
    }
}
```

Another component might listen for commands and act on them. Previously in EasyNetQ it would have been difficult to implement this pattern, because a consumer (subscriber) was always bound to a given message type. You would have had to use the lower-level IAdvancedBus binary message methods and implement your own serialization and dispatch infrastructure.

But now EasyNetQ comes with multiple handlers per consumer out of the box. From version 0.20 there's a new overload of the Consume method that provides a fluent way for you to register multiple message type handlers to a single consumer, and thus a single queue. Here's an example:

```csharp
bus = RabbitHutch.CreateBus("host=localhost");
var queue = bus.Advanced.QueueDeclare("multiple_types");

bus.Advanced.Consume(queue, x => x
    .Add<AddUser>((message, info) =>
        {
            Console.WriteLine("Add User {0}", message.Body.Username);
        })
    .Add<DeleteUser>((message, info) =>
        {
            Console.WriteLine("Delete User {0}", message.Body.Username);
        })
    );
```

Now we can publish multiple message types to the same queue:

```csharp
bus.Advanced.Publish(Exchange.GetDefault(), queue.Name, false, false,
    new Message<AddUser>(new AddUser("Steve Howe", "steve@worlds-best-guitarist.com")));

bus.Advanced.Publish(Exchange.GetDefault(), queue.Name, false, false,
    new Message<DeleteUser>(new DeleteUser("Steve Howe")));
```

By default, if a matching handler cannot be found for a message, EasyNetQ will throw an exception.
You can change this behaviour, and simply ignore messages that do not have a handler, by setting the ThrowOnNoMatchingHandler property to false, like this:

```csharp
bus.Advanced.Consume(queue, x => x
    .Add<AddUser>((message, info) =>
        {
            Console.WriteLine("Add User {0}", message.Body.Username);
        })
    .Add<DeleteUser>((message, info) =>
        {
            Console.WriteLine("Delete User {0}", message.Body.Username);
        })
    .ThrowOnNoMatchingHandler = false
    );
```

Very soon there will be a send/receive pattern implemented at the IBus level to make this even easier. Watch this space! Happy commanding!
December 1, 2013
· 6,592 Views
EasyNetQ: Consumer Cancellation
Consumer cancellation has been a requested feature of EasyNetQ for a while now. I wasn't intending to implement it immediately, but a pull request by Daniel White today made me look at the whole issue of consumer cancellation from the point of view of a deleted queue. It led rapidly to a quite straightforward implementation of user cancellations. The wonders of open source software development, and the generosity of people like Daniel, never fail to impress me.

So what is consumer cancellation? It means that you can stop consuming from a queue without having to dispose of the entire IBus instance and close the connection. All the IBus Subscribe methods, and the IAdvancedBus Consume methods, now return an IDisposable. To stop consuming, just call Dispose like this:

```csharp
var cancelSubscription = bus.Subscribe<MyMessage>("subscriptionId", MessageHandler);

// sometime later, stop consuming
cancelSubscription.Dispose();
```

Nice :)
November 16, 2013
· 4,985 Views
EasyNetQ: Big Breaking Changes to Request-Response
My intensive work on EasyNetQ (our super simple .NET API for RabbitMQ) continues. I've been taking lessons learned from nearly two years of production and the fantastic feedback from EasyNetQ's users, mashing this together, and making lots of changes to both the internals and the API.

I know that API changes cause problems for users; they break your application and force you to revisit your code. But the longer-term benefits should outweigh the immediate costs as EasyNetQ slowly morphs into a solid, reliable library that really does make working with RabbitMQ as easy as possible.

Changes in version 0.17 are all around the request-response pattern. The initial implementation was very rough, with lots of nasty ways that resource use could run away when things went wrong. The lack of timeouts also meant that your application could wait forever when messages got lost. Lastly, the API was quite clunky, with call-backs where Tasks are a far better choice. All these problems have been corrected in this version.

API changes

There is now a synchronous Request method. Of course messaging is by nature an asynchronous operation, but sometimes you just want the simplest possible thing and you don't care about blocking your thread while you wait for a response. Here's what it looks like:

```csharp
var response = bus.Request(request);
```

The old call-back Request method has been removed. There was no need for it, since the RequestAsync overload that returns a Task was always a better choice:

```csharp
var task = bus.RequestAsync(request);
task.ContinueWith(response =>
    {
        Console.WriteLine("Got response: '{0}'", response.Result.Text);
    });
```

Timeouts

Timeouts are an essential ingredient of any distributed system. This probably deserves a blog post of its own, but no matter how resilient you make your architecture, if an important piece simply goes away (like the network, for example), you need a circuit breaker.
EasyNetQ now has a global timeout that you can configure via the connection string:

```csharp
var bus = RabbitHutch.CreateBus("host=localhost;timeout=60");
```

Here we've configured the timeout as 60 seconds. The default is 10 seconds. If you make a request but no response is received within the timeout period, a System.TimeoutException will be thrown. If the connection goes away while a request is in flight, EasyNetQ doesn't wait for the timeout to fire, but immediately throws an EasyNetQException with a message saying that the connection has been lost. Your application should catch both TimeoutException and EasyNetQException and react appropriately.

Internal changes

My last blog post was a discussion of the implementation options of request-response with RabbitMQ. As I said there, I now believe that a single exclusive queue for all responses to a client is the best option. Version 0.17 implements this. When you call bus.Request(...) you will see a queue created named easynetq.response.. This will last for the lifetime of the current connection.

Happy requesting!
November 4, 2013
· 10,089 Views
EasyNetQ: Publisher Confirms
Publisher confirms are a RabbitMQ addition to AMQP to guarantee message delivery. In short, they provide an asynchronous confirmation that a publish has successfully reached all the queues that it was routed to.

To turn on publisher confirms with EasyNetQ, set the publisherConfirms connection string parameter like this:

```csharp
var bus = RabbitHutch.CreateBus("host=localhost;publisherConfirms=true");
```

When you set this flag, EasyNetQ will wait for the confirmation, or a timeout, before returning from the Publish method:

```csharp
bus.Publish(new MyMessage { Text = "Hello World!" });
// here the publish has been confirmed
```

Nice and easy. There's a problem though. If I run the above code in a while loop without publisher confirms, I can publish around 4,000 messages per second, but with publisher confirms switched on that drops to around 140 per second. Not so good.

With EasyNetQ 0.15 we introduced a new PublishAsync method that returns a Task. The Task completes when the publish is confirmed:

```csharp
bus.PublishAsync(message).ContinueWith(task =>
    {
        if (task.IsCompleted)
        {
            Console.WriteLine("Publish completed fine.");
        }
        if (task.IsFaulted)
        {
            Console.WriteLine(task.Exception);
        }
    });
```

Using this code in a while loop gets us back to 4,000 messages per second with publisher confirms on. Happy confirms!
November 1, 2013
· 8,134 Views
RabbitMQ Request-Response Pattern
If you are programming against a web service, the natural pattern is request-response. It's always initiated by the client, which then waits for a response from the server. It's great if the client wants to send some information to a server, or request some information based on some criteria. It's not so useful if the server wants to initiate the send of some information to the client. There we have to rely on somewhat extended HTTP tricks like long-polling or web-hooks.

With messaging systems, the natural pattern is send-receive. A producer node publishes a message which is then passed to a consuming node. There is no real concept of client or server; a node can be a producer, a consumer, or both. This works very well when one node wants to send some information to another or vice-versa, but isn't so useful if one node wants to request information from another based on some criteria.

All is not lost though. We can model request-response by having the client node create a reply queue for the response to a query message it sends to the server. The client can set the request message properties' reply_to field with the reply queue name. The server inspects the reply_to field and publishes the reply to the reply queue via the default exchange, which is then consumed by the client.

The implementation is simple on the request side; it looks just like a standard send-receive. But on the reply side, we have some choices to make. If you Google for 'RabbitMQ RPC', or 'RabbitMQ request response', you will find several different opinions concerning the nature of the reply queue. Should there be a reply queue per request, or should the client maintain a single reply queue for multiple requests? Should the reply queue be exclusive, only available to this channel, or not? Note that an exclusive queue will be deleted when the channel is closed, either intentionally, or if there is a network or broker failure that causes the connection to be lost.
Let's have a look at the pros and cons of these choices.

Exclusive Reply Queue Per Request

Here each request creates a reply queue. The benefits are that it is simple to implement, and there is no problem with correlating the response with the request, since each request has its own response consumer. If the connection between the client and the broker fails before a response is received, the broker will dispose of any remaining reply queues and the response message will be lost. The main implementation issue is that we need to clean up any reply queues in the event that a problem with the server means that it never publishes the response. This pattern also has a performance cost, because a new queue and consumer have to be created for each request.

Exclusive Reply Queue Per Client

Here each client connection maintains a reply queue which many requests can share. This avoids the performance cost of creating a queue and consumer per request, but adds the overhead that the client needs to keep track of the reply queue and match up responses with their respective requests. The standard way of doing this is with a correlation id that is copied by the server from the request to the response. Once again, there is no problem with deleting the reply queue when the client disconnects, because the broker will do this automatically. It does mean that any responses that are in flight at the time of a disconnection will be lost.

Durable Reply Queue

Both the options above have the problem that the response message can be lost if the connection between the client and broker goes down while the response is in flight. This is because they use exclusive queues that are deleted by the broker when the connection that owns them is closed. The natural answer to this is to use a non-exclusive reply queue. However, this creates some management overhead. You need some way to name the reply queue and associate it with a particular client.
The problem is that it's difficult for the client to know if any one reply queue belongs to itself, or to another instance. It's easy to naively create a situation where responses are being delivered to the wrong instance of the client. You will probably wind up manually creating and naming response queues, which removes one of the main benefits of choosing broker-based messaging in the first place.

EasyNetQ

For a high-level reusable library like EasyNetQ, the durable reply queue option is out of the question. There is no sensible way of knowing whether a particular instance of the library belongs to a single logical instance of a client application. By 'logical instance' I mean an instance that might have been stopped and started, as opposed to two separate instances of the same client. Instead we have to use exclusive queues and accept the occasional loss of response messages. It is essential to implement a timeout, so that an exception can be raised to the client application in the event of response loss. Ideally the client will catch the exception and retry the message if appropriate.

Currently EasyNetQ implements the 'reply queue per request' pattern, but I'm planning to change it to a 'reply queue per client'. The overhead of matching up responses to requests is not too onerous, and it is both more efficient and easier to manage.

I'd be very interested in hearing other people's experiences in implementing request-response with RabbitMQ.
October 31, 2013
· 40,088 Views · 1 Like
EasyNetQ: Big Breaking Changes in the Advanced Bus
EasyNetQ is my little, easy-to-use client API for RabbitMQ. It's been doing really well recently. As I write this, it has 24,653 downloads on NuGet, making it by far the most popular high-level RabbitMQ API.

The goal of EasyNetQ is to make working with RabbitMQ as easy as possible. I wanted junior developers to be able to use basic messaging patterns out-of-the-box with just a few lines of code, and have EasyNetQ do all the heavy lifting: exchange-binding-queue configuration, error management, connection management, serialization, thread handling; all the things that make working against the low-level AMQP C# API provided by RabbitMQ such a steep learning curve.

To meet this goal, EasyNetQ has to be a very opinionated library. It has a set way of configuring exchanges, bindings and queues based on the .NET type of your messages. However, right from the first release, many users said that they liked the connection management, thread handling, and error management, but wanted to be able to set up their own broker topology. To support this, we introduced the advanced API, an idea stolen shamelessly from Ayende's RavenDB client. You access the advanced bus (IAdvancedBus) via the Advanced property on IBus:

```csharp
var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;
```

Sometimes something can seem like a good idea at the time, and then later you think, "WTF! Why on earth did I do that?" It happens to me all the time. I thought it would be cool if I created the exchange-binding-queue topology and then passed it to the publish and subscribe methods, which would then internally declare the exchanges and queues and do the binding. I implemented a tasty little visitor pattern in my ITopologyVisitor. I optimized for my own programming pleasure rather than a simple, obvious, easy-to-understand API. I realized a while ago that a more straightforward set of declares on IAdvancedBus would be a far more obvious and intentional design.
To this end, I've refactored the advanced bus to separate declares from publishing and consuming. I just pushed the changes to NuGet and have also updated the Advanced Bus documentation. Note that these are breaking changes, so please be careful if you are upgrading to the latest version, 0.12, and upwards. Here is a taste of how it works.

Declare a queue, exchange and binding, and consume raw message bytes:

```csharp
var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;
var queue = advancedBus.QueueDeclare("my_queue");
var exchange = advancedBus.ExchangeDeclare("my_exchange", ExchangeType.Direct);
advancedBus.Bind(exchange, queue, "routing_key");

advancedBus.Consume(queue, (body, properties, info) => Task.Factory.StartNew(() =>
    {
        var message = Encoding.UTF8.GetString(body);
        Console.Out.WriteLine("Got message: '{0}'", message);
    }));
```

Note that I've renamed 'Subscribe' to 'Consume' to better reflect the underlying AMQP method.

Declare an exchange and publish a message:

```csharp
var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;
var exchange = advancedBus.ExchangeDeclare("my_exchange", ExchangeType.Direct);

using (var channel = advancedBus.OpenPublishChannel())
{
    var body = Encoding.UTF8.GetBytes("Hello World!");
    channel.Publish(exchange, "routing_key", new MessageProperties(), body);
}
```

You can also delete exchanges, queues and bindings:

```csharp
var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;

// declare some objects
var queue = advancedBus.QueueDeclare("my_queue");
var exchange = advancedBus.ExchangeDeclare("my_exchange", ExchangeType.Direct);
var binding = advancedBus.Bind(exchange, queue, "routing_key");

// and then delete them
advancedBus.BindingDelete(binding);
advancedBus.ExchangeDelete(exchange);
advancedBus.QueueDelete(queue);

advancedBus.Dispose();
```

I think these changes make for a much better advanced API. Have a look at the documentation for the details.
September 13, 2013
· 11,112 Views
Automating Nginx Reverse Proxy Configuration
It's really nice if you can decouple your external API from the details of application segregation and deployment. In a previous post I explained some of the benefits of using a reverse proxy. On my current project we're building a distributed service oriented architecture that also exposes an HTTP API, and we're using a reverse proxy to route requests addressed to our API to individual components. We have chosen the excellent Nginx web server to serve as our reverse proxy; it's fast, reliable and easy to configure. We use it to aggregate multiple services exposing HTTP APIs into a single URL space.

So, for example, when you type:

http://api.example.com/product/pinstripe_suit

It gets routed to:

http://10.0.1.101:8001/product/pinstripe_suit

But when you go to:

http://api.example.com/customer/103474783

It gets routed to:

http://10.0.1.104:8003/customer/103474783

To the consumer of the API it appears that they are exploring a single URL space (http://api.example.com/blah/blah), but behind the scenes the different top-level segments of the URL route to different back-end servers: /product/… routes to 10.0.1.101:8001, but /customer/… routes to 10.0.1.104:8003.

We also want this to be self-configuring. Say I want to create a new component of the system that records stock levels. Rather than extending an existing component, I want to be able to write a stand-alone executable or service that exposes an HTTP endpoint, have it be automatically deployed to one of the hosts in my cloud infrastructure, and have Nginx automatically route requests addressed to http://api.example.com/stock/whatever to my new component. We also want to load balance these back-end services: we might want to deploy several instances of our new stock API and have Nginx automatically round-robin between them.

We call each top-level segment (/stock, /product, /customer) a claim. A component publishes an 'AddApiClaim' message over RabbitMQ when it comes on line.
This message has three fields: 'Claim', 'IpAddress' and 'PortNumber'. We have a special component, ProxyAutomation, that subscribes to these messages and rewrites the Nginx configuration as required. It uses SSH and SCP to log into the Nginx server, transfer the various configuration files, and instruct Nginx to reload its configuration. We use the excellent SSH.NET library to automate this.

A really nice thing about Nginx configuration is wildcard includes. Take a look at our top-level configuration file:

```nginx
...
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;
}
```

The final include line says: take any *.conf file in the conf.d directory and add it here. Inside conf.d is a single file for all api.example.com requests:

```nginx
include /etc/nginx/conf.d/api.example.com.conf.d/upstream.*.conf;

server {
    listen 80;
    server_name api.example.com;

    include /etc/nginx/conf.d/api.example.com.conf.d/location.*.conf;

    location / {
        root  /usr/share/nginx/api.example.com;
        index index.html index.htm;
    }
}
```

This is basically saying: listen on port 80 for any requests with a host header 'api.example.com'. This file has two includes. The first, the upstream include at the top, I'll talk about later. The second says: take any file named location.*.conf in the subdirectory 'api.example.com.conf.d' and add it to the configuration. Our proxy automation component adds new components (AKA API claims) by dropping new location.*.conf files into this directory. For example, for our stock component it might create a file, 'location.stock.conf', like this:

```nginx
location /stock/ {
    proxy_pass http://stock;
}
```

This simply tells Nginx to proxy all requests addressed to api.example.com/stock/… to the upstream servers defined at 'stock'.
This is where the other include mentioned above comes in, 'upstream.*.conf'. The proxy automation component also drops in a file named upstream.stock.conf that looks something like this:

```nginx
upstream stock {
    server 10.0.0.23:8001;
    server 10.0.0.23:8002;
}
```

This tells Nginx to round-robin all requests to api.example.com/stock/ to the given sockets. In this example it's two components on the same machine (10.0.0.23), one on port 8001 and the other on port 8002. As instances of the stock component get deployed, new entries are added to upstream.stock.conf. Similarly, when components get uninstalled, the entry is removed. When the last entry is removed, the whole file is also deleted.

This infrastructure allows us to decouple infrastructure configuration from component deployment. We can scale the application up and down by simply adding new component instances as required. As a component developer, I don't need to do any proxy configuration; I just make sure my component publishes add and remove API claim messages and I'm good to go.
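To make the flow above concrete, here is a minimal sketch of what the claim message and the reload step might look like. The class shape, file paths, and the sudo command are my own illustrative assumptions, not the actual ProxyAutomation code:

```csharp
// Hypothetical shape of the 'AddApiClaim' message described above.
public class AddApiClaim
{
    public string Claim { get; set; }      // e.g. "stock"
    public string IpAddress { get; set; }  // e.g. "10.0.0.23"
    public int PortNumber { get; set; }    // e.g. 8001
}

// Sketch of a handler using SSH.NET: write the location file,
// then ask Nginx to reload. Paths and commands are examples only.
public void ApplyClaim(AddApiClaim claim, ConnectionInfo nginxServer)
{
    var locationConf = string.Format(
        "location /{0}/ {{\n    proxy_pass http://{0};\n}}\n", claim.Claim);

    using (var scp = new ScpClient(nginxServer))
    {
        scp.Connect();
        scp.Upload(
            new MemoryStream(Encoding.UTF8.GetBytes(locationConf)),
            string.Format("/etc/nginx/conf.d/api.example.com.conf.d/location.{0}.conf", claim.Claim));
        scp.Disconnect();
    }

    using (var ssh = new SshClient(nginxServer))
    {
        ssh.Connect();
        // Tell Nginx to re-read its configuration without dropping connections.
        ssh.RunCommand("sudo nginx -s reload");
        ssh.Disconnect();
    }
}
```

The real component would of course also handle the upstream.*.conf files and the corresponding remove messages, but the shape is the same: generate a fragment, copy it over, reload.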
June 19, 2013
· 58,531 Views
Using SSH.NET
I've recently had the need to automate configuration of Nginx on an Ubuntu server. Of course, in UNIX land we like to use SSH (Secure Shell) to log into our servers and manage them remotely. Wouldn't it be nice, I thought, if there was a managed SSH library somewhere, so that I could automate logging onto my Ubuntu server, running various commands and transferring files. A short Google turned up SSH.NET by the somewhat mysterious Olegkap (at least I couldn't find out anything else about them), which turned out to be just what I wanted. Here's the blurb on the CodePlex site:

"This project was inspired by Sharp.SSH library which was ported from java and it seems like was not supported for quite some time. This library is complete rewrite using .NET 4.0, without any third party dependencies and to utilize the parallelism as much as possible to allow best performance I can get."

It does exactly what it says on the tin. It's on NuGet, so you can grab it with:

PM> Install-Package SSH.NET

Here's how you run a remote command.
First you need to build a ConnectionInfo object:

```csharp
public ConnectionInfo CreateConnectionInfo()
{
    const string privateKeyFilePath = @"C:\some\private\key.pem";
    ConnectionInfo connectionInfo;
    using (var stream = new FileStream(privateKeyFilePath, FileMode.Open, FileAccess.Read))
    {
        var privateKeyFile = new PrivateKeyFile(stream);
        AuthenticationMethod authenticationMethod =
            new PrivateKeyAuthenticationMethod("ubuntu", privateKeyFile);

        connectionInfo = new ConnectionInfo(
            "my.server.com",
            "ubuntu",
            authenticationMethod);
    }
    return connectionInfo;
}
```

Then you simply create an SshClient instance and run commands:

```csharp
public void Connect()
{
    using (var ssh = new SshClient(CreateConnectionInfo()))
    {
        ssh.Connect();
        var command = ssh.CreateCommand("uptime");
        var result = command.Execute();
        Console.Out.WriteLine(result);
        ssh.Disconnect();
    }
}
```

Here I'm running the 'uptime' command, which output this when I ran it just now:

14:37:46 up 22 days, 3:59, 0 users, load average: 0.08, 0.03, 0.05

To transfer a file, just use the ScpClient:

```csharp
public void GetConfigurationFiles()
{
    using (var scp = new ScpClient(CreateNginxServerConnectionInfo()))
    {
        scp.Connect();
        scp.Download("/etc/nginx/", new DirectoryInfo(@"D:\Temp\ScpDownloadTest"));
        scp.Disconnect();
    }
}
```

This grabs all my Nginx configuration and transfers it to a directory tree on my Windows machine. All in all, a very nice little library that's been working well for me so far. Give it a try if you need to interact with a UNIX-like machine from .NET code.
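Going the other way, uploading a file, is just as simple with ScpClient. A minimal sketch (the local and remote paths here are just examples of mine, not from the post):

```csharp
public void PutConfigurationFile()
{
    using (var scp = new ScpClient(CreateConnectionInfo()))
    {
        scp.Connect();
        // Upload a local file to a path on the remote server.
        // ScpClient also has a Stream overload if you want to
        // generate the content in memory instead.
        scp.Upload(
            new FileInfo(@"D:\Temp\location.stock.conf"),
            "/etc/nginx/conf.d/location.stock.conf");
        scp.Disconnect();
    }
}
```

Between Download, Upload and CreateCommand, that covers everything I've needed for remote server automation so far.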
June 9, 2013
· 30,331 Views
The Benefits of a Reverse Proxy
A typical ASP.NET public website hosted on IIS is usually configured in such a way that the server IIS is installed on is visible to the public internet. HTTP requests from a browser or web service client are routed directly to IIS, which also hosts the ASP.NET worker process. All the functionality needed to produce the website is embodied in a single server, including caching, SSL termination, authentication, serving static files and compression. This approach is simple and straightforward for small sites, but is hard to scale, both in terms of performance and in terms of managing the complexity of a large application. This is especially true if you have a distributed service oriented architecture with multiple HTTP endpoints that appear and disappear frequently.

A reverse proxy is a server component that sits between the internet and your web servers. It accepts HTTP requests, provides various services, and forwards the requests to one or many servers. Having a point at which you can inspect, transform and route HTTP requests before they reach your web servers provides a whole host of benefits. Here are some:

Load Balancing

This is the reverse proxy function that people are most familiar with. Here the proxy routes incoming HTTP requests to a number of identical web servers. This can work on a simple round-robin basis, or, if you have stateful web servers (it's better not to), there are session-aware load balancers available. It's such a common function that load balancing reverse proxies are usually just referred to as 'load balancers'. There are specialized load balancing products available, but many general purpose reverse proxies also provide load balancing functionality.

Security

A reverse proxy can hide the topology and characteristics of your back-end servers by removing the need for direct internet access to them. You can place your reverse proxy in an internet-facing DMZ, but hide your web servers inside a non-public subnet.
Authentication

You can use your reverse proxy to provide a single point of authentication for all HTTP requests.

SSL Termination

Here the reverse proxy handles incoming HTTPS connections, decrypting the requests and passing unencrypted requests on to the web servers. This has several benefits:

- Removes the need to install certificates on many back-end web servers.
- Provides a single point of configuration and management for SSL/TLS.
- Takes the processing load of encrypting/decrypting HTTPS traffic away from web servers.
- Makes testing and intercepting HTTP requests to individual web servers easier.

Serving Static Content

Not strictly speaking 'reverse proxying' as such, but some reverse proxy servers can also act as web servers serving static content. The average web page can often consist of megabytes of static content such as images, CSS files and JavaScript files. By serving these separately you can take considerable load from back-end web servers, leaving them free to render dynamic content.

Caching

The reverse proxy can also act as a cache. You can either have a dumb cache that simply expires after a set period, or, better still, a cache that respects Cache-Control and Expires headers. This can considerably reduce the load on the back-end servers.

Compression

In order to reduce the bandwidth needed for individual requests, the reverse proxy can decompress incoming requests and compress outgoing ones. This reduces the load on the back-end servers that would otherwise have to do the compression, and makes debugging requests to, and responses from, the back-end servers easier.

Centralised Logging and Auditing

Because all HTTP requests are routed through the reverse proxy, it makes an excellent point for logging and auditing.

URL Rewriting

Sometimes the URL scheme that a legacy application presents is not ideal for discovery or search engine optimisation. A reverse proxy can rewrite URLs before passing them on to your back-end servers.
For example, a legacy ASP.NET application might have a URL for a product that looks like this:

http://www.myexampleshop.com/products.aspx?productid=1234

You can use a reverse proxy to present a search engine optimised URL instead:

http://www.myexampleshop.com/products/1234/lunar-module

Aggregating Multiple Websites Into the Same URL Space

In a distributed architecture it's desirable to have different pieces of functionality served by isolated components. A reverse proxy can route different branches of a single URL address space to different internal web servers. For example, say I've got three internal web servers:

http://products.internal.net/
http://orders.internal.net/
http://stock-control.internal.net/

I can route these from a single external domain using my reverse proxy:

http://www.example.com/products/ -> http://products.internal.net/
http://www.example.com/orders/ -> http://orders.internal.net/
http://www.example.com/stock/ -> http://stock-control.internal.net/

To an external customer it appears that they are simply navigating a single website, but internally the organisation is maintaining three entirely separate sites. This approach can work extremely well for web service APIs, where the reverse proxy provides a consistent single public facade to an internal distributed component oriented architecture.

So, a reverse proxy can offload much of the infrastructure concern of a high-volume distributed web application. We're currently looking at Nginx for this role. Expect some practical Nginx related posts about how to do some of this stuff in the very near future. Happy proxying!
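In Nginx, for instance, that aggregation could be sketched with a handful of proxy_pass locations. This is illustrative config only, using the made-up host names from above:

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Route each top-level branch of the URL space
    # to a different internal site.
    location /products/ {
        proxy_pass http://products.internal.net/;
    }

    location /orders/ {
        proxy_pass http://orders.internal.net/;
    }

    location /stock/ {
        proxy_pass http://stock-control.internal.net/;
    }
}
```

Each internal site can then be deployed, scaled and retired independently without the external URL space ever changing.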
May 6, 2013
· 31,768 Views · 2 Likes
A C# .NET Client Proxy for RabbitMQ Management API
RabbitMQ comes with a very nice management UI and an HTTP JSON API that allows you to configure and monitor your RabbitMQ broker. From the website:

"The rabbitmq-management plugin provides an HTTP-based API for management and monitoring of your RabbitMQ server, along with a browser-based UI and a command line tool, rabbitmqadmin. Features include:

- Declare, list and delete exchanges, queues, bindings, users, virtual hosts and permissions.
- Monitor queue length, message rates globally and per channel, data rates per connection, etc.
- Send and receive messages.
- Monitor Erlang processes, file descriptors, memory use.
- Export / import object definitions to JSON.
- Force close connections, purge queues."

Wouldn't it be cool if you could do all these management tasks from your .NET code? Well, now you can. I've just added a new project to EasyNetQ called EasyNetQ.Management.Client. This is a .NET client-side proxy for the HTTP-based API. It's on NuGet, so to install it, you simply run:

PM> Install-Package EasyNetQ.Management.Client

To give an overview of the sort of things you can do with EasyNetQ.Management.Client, have a look at this code. It first creates a new virtual host and a user, and gives the user permissions on the virtual host. Then it re-connects as the new user, creates an exchange and a queue, binds them, and publishes a message to the exchange. Finally, it gets the first message from the queue and outputs it to the console.
```csharp
var initial = new ManagementClient("http://localhost", "guest", "guest");

// first create a new virtual host
var vhost = initial.CreateVirtualHost("my_virtual_host");

// next create a user for that virtual host
var user = initial.CreateUser(new UserInfo("mike", "topSecret"));

// give the new user all permissions on the virtual host
initial.CreatePermission(new PermissionInfo(user, vhost));

// now log in again as the new user
var management = new ManagementClient("http://localhost", user.name, "topSecret");

// test that everything's OK
management.IsAlive(vhost);

// create an exchange
var exchange = management.CreateExchange(new ExchangeInfo("my_exchange", "direct"), vhost);

// create a queue
var queue = management.CreateQueue(new QueueInfo("my_queue"), vhost);

// bind the exchange to the queue
management.CreateBinding(exchange, queue, new BindingInfo("my_routing_key"));

// publish a test message
management.Publish(exchange, new PublishInfo("my_routing_key", "Hello World!"));

// get any messages on the queue
var messages = management.GetMessagesFromQueue(queue, new GetMessagesCriteria(1, false));

foreach (var message in messages)
{
    Console.Out.WriteLine("message.payload = {0}", message.payload);
}
```

This library is also ideal for monitoring queue levels, channels and connections on your RabbitMQ broker.
For example, this code prints out details of all the current connections to the RabbitMQ broker:

```csharp
var connections = managementClient.GetConnections();

foreach (var connection in connections)
{
    Console.Out.WriteLine("connection.name = {0}", connection.name);
    Console.WriteLine("user:\t{0}", connection.client_properties.user);
    Console.WriteLine("application:\t{0}", connection.client_properties.application);
    Console.WriteLine("client_api:\t{0}", connection.client_properties.client_api);
    Console.WriteLine("application_location:\t{0}", connection.client_properties.application_location);
    Console.WriteLine("connected:\t{0}", connection.client_properties.connected);
    Console.WriteLine("easynetq_version:\t{0}", connection.client_properties.easynetq_version);
    Console.WriteLine("machine_name:\t{0}", connection.client_properties.machine_name);
}
```

On my machine, with one consumer running, it outputs this:

connection.name = [::1]:64754 -> [::1]:5672
user:                 guest
application:          EasyNetQ.Tests.Performance.Consumer.exe
client_api:           EasyNetQ
application_location: D:\Source\EasyNetQ\Source\EasyNetQ.Tests.Performance.Consumer\bin\Debug
connected:            14/11/2012 15:06:19
easynetq_version:     0.9.0.0
machine_name:         THOMAS

You can see the name of the application that's making the connection, the machine it's running on and even its location on disk. That's rather nice. From this information it wouldn't be too hard to auto-generate a complete system diagram of your distributed messaging application. Now there's an idea :)
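Monitoring queue levels follows the same pattern. A minimal sketch; I'm assuming a GetQueues() call and the same lowercase, JSON-style property names (name, messages) as in the connection example, so check against the version you're using:

```csharp
// List each queue on the broker with its current message count.
var queues = managementClient.GetQueues();

foreach (var queue in queues)
{
    Console.Out.WriteLine("{0}: {1} messages", queue.name, queue.messages);
}
```

Polling this on a timer gives you a very cheap queue-depth monitor for alerting on backed-up consumers.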
December 7, 2012
· 7,190 Views
EasyNetQ Cluster Support
EasyNetQ, my super simple .NET API for RabbitMQ, now (from version 0.7.2.34) supports RabbitMQ clusters without any need to deploy a load balancer. Simply list the nodes of the cluster in the connection string:

```csharp
var bus = RabbitHutch.CreateBus("host=ubuntu:5672,ubuntu:5673");
```

In this example I have set up a cluster on a single machine, 'ubuntu', with node 1 on port 5672 and node 2 on port 5673. When the CreateBus statement executes, EasyNetQ will attempt to connect to the first host listed (ubuntu:5672). If it fails to connect, it will attempt to connect to the second host listed (ubuntu:5673). If neither node is available it will sit in a retry loop, attempting to connect to both servers every five seconds. It logs all this activity to the registered IEasyNetQLogger. You might see something like this if the first node was unavailable:

```
DEBUG: Trying to connect
ERROR: Failed to connect to Broker: 'ubuntu', Port: 5672 VHost: '/'. ExceptionMessage: 'None of the specified endpoints were reachable'
DEBUG: OnConnected event fired
INFO: Connected to RabbitMQ. Broker: 'ubuntu', Port: 5674, VHost: '/'
```

If the node that EasyNetQ is connected to fails, EasyNetQ will attempt to connect to the next listed node. Once connected, it will re-declare all the exchanges and queues and re-start all the consumers. Here's an example log record showing one node failing, then EasyNetQ connecting to the other node and recreating the subscribers:

```
INFO: Disconnected from RabbitMQ Broker
DEBUG: Trying to connect
DEBUG: OnConnected event fired
DEBUG: Re-creating subscribers
INFO: Connected to RabbitMQ. Broker: 'ubuntu', Port: 5674, VHost: '/'
```

You get automatic fail-over out of the box. That's pretty cool. If you have multiple services using EasyNetQ to connect to a RabbitMQ cluster, they will all initially connect to the first listed node in their respective connection strings. For this reason, EasyNetQ's cluster support is not really suitable for load balancing high-throughput systems.
I would recommend that you use a dedicated hardware or software load balancer instead, if that’s what you want.
October 14, 2012
· 6,327 Views
Parsing a Connection String With 'Sprache' C# Parser
Sprache is a very cool lightweight parser library for C#. Today I was experimenting with parsing EasyNetQ connection strings, so I thought I'd have a go at getting Sprache to do it. An EasyNetQ connection string is a list of key-value pairs like this:

key1=value1;key2=value2;key3=value3

The motivation for looking at something more sophisticated than simply chopping strings based on delimiters is that I'm thinking of having more complex values that would themselves need parsing. But that's for the future; today I'm just going to parse a simple connection string where the values can be strings or numbers (ushort to be exact). So, I want to parse a connection string that looks like this:

virtualHost=Copa;username=Copa;host=192.168.1.1;password=abc_xyz;port=12345;requestedHeartbeat=3

… into a strongly typed structure like this:

```csharp
public class ConnectionConfiguration : IConnectionConfiguration
{
    public string Host { get; set; }
    public ushort Port { get; set; }
    public string VirtualHost { get; set; }
    public string UserName { get; set; }
    public string Password { get; set; }
    public ushort RequestedHeartbeat { get; set; }
}
```

I want it to be as easy as possible to add new connection string items. First let's define a name for a function that updates a ConnectionConfiguration. An uncommonly used version of the 'using' statement allows us to give a short name to a complex type:

```csharp
using UpdateConfiguration = Func<ConnectionConfiguration, ConnectionConfiguration>;
```

Now let's define a little function that creates a Sprache parser for a key-value pair. We supply the key and a parser for the value, and get back a parser that can update the ConnectionConfiguration.
```csharp
public static Parser<UpdateConfiguration> BuildKeyValueParser<T>(
    string keyName,
    Parser<T> valueParser,
    Expression<Func<ConnectionConfiguration, T>> getter)
{
    return
        from key in Parse.String(keyName).Token()
        from separator in Parse.Char('=')
        from value in valueParser
        select (Func<ConnectionConfiguration, ConnectionConfiguration>)(c =>
        {
            CreateSetter(getter)(c, value);
            return c;
        });
}
```

The CreateSetter is a little function that turns a property expression (like x => x.Name) into an Action that sets the property. Next let's define parsers for string and number values:

```csharp
public static Parser<string> Text = Parse.CharExcept(';').Many().Text();
public static Parser<ushort> Number = Parse.Number.Select(ushort.Parse);
```

Now we can chain a series of BuildKeyValueParser invocations and Or them together, so that we can parse any of our expected key-values:

```csharp
public static Parser<UpdateConfiguration> Part = new List<Parser<UpdateConfiguration>>
{
    BuildKeyValueParser("host", Text, c => c.Host),
    BuildKeyValueParser("port", Number, c => c.Port),
    BuildKeyValueParser("virtualHost", Text, c => c.VirtualHost),
    BuildKeyValueParser("requestedHeartbeat", Number, c => c.RequestedHeartbeat),
    BuildKeyValueParser("username", Text, c => c.UserName),
    BuildKeyValueParser("password", Text, c => c.Password),
}.Aggregate((a, b) => a.Or(b));
```

Each invocation of BuildKeyValueParser defines an expected key-value pair of our connection string. We just give the key name, the parser that understands the value, and the property on ConnectionConfiguration that we want to update. In effect we've defined a little DSL for connection strings. If I want to add a new connection string value, I simply add a new property to ConnectionConfiguration and a single line to the above code.
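CreateSetter itself isn't shown in the post. A minimal sketch of how such a helper might look (my own reconstruction under that assumption, not necessarily the original implementation):

```csharp
// Turn a property expression like (c => c.Host) into an action
// that assigns a value to that property on a given instance.
public static Action<ConnectionConfiguration, T> CreateSetter<T>(
    Expression<Func<ConnectionConfiguration, T>> getter)
{
    var memberExpression = (MemberExpression)getter.Body;
    var property = (PropertyInfo)memberExpression.Member;
    return (configuration, value) => property.SetValue(configuration, value, null);
}
```

The nice thing about taking the getter expression rather than a setter delegate is that the call sites stay short and declarative.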
Now let's define a parser for the entire string, by saying that we'll parse any number of key-value parts:

```csharp
public static Parser<IEnumerable<UpdateConfiguration>> ConnectionStringBuilder =
    from first in Part
    from rest in Parse.Char(';').Then(_ => Part).Many()
    select Cons(first, rest);
```

All we have to do now is parse the connection string and apply the chain of update functions to a ConnectionConfiguration instance:

```csharp
public IConnectionConfiguration Parse(string connectionString)
{
    var updater = ConnectionStringGrammar.ConnectionStringBuilder.Parse(connectionString);
    return updater.Aggregate(
        new ConnectionConfiguration(),
        (current, updateFunction) => updateFunction(current));
}
```

We get lots of nice things out of the box with Sprache; one of the best is the excellent error messages:

Parsing failure: unexpected 'x'; expected host or port or virtualHost or requestedHeartbeat or username or password (Line 1, Column 1).

Sprache is really nice for this kind of task. I'd recommend checking it out.
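The Cons helper used in the select clause above just prepends an element onto a sequence. The post doesn't show it, but a likely implementation is:

```csharp
// Prepend a single head element onto a (possibly lazy) sequence.
public static IEnumerable<T> Cons<T>(T head, IEnumerable<T> rest)
{
    yield return head;
    foreach (var item in rest)
    {
        yield return item;
    }
}
```

Using an iterator keeps the whole grammar lazy: nothing is evaluated until the Aggregate in Parse walks the update functions.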
October 3, 2012
· 6,902 Views
When Should I Use An ORM?
I think, like everyone, I go through the same journey whenever I find out about a new technology:

Huh? -> This is really cool -> I use it everywhere -> Hmm, sometimes it's not so great

Remember when people were writing websites with XSLT transforms? Yes, exactly. XML is great for storing a data structure as a string, but you really don't want to be coding your application's business logic with it.

I've gone through a similar journey with Object Relational Mapping tools. After hand-coding my DALs, then code-generating them, ORMs seemed like the answer to all my problems. I became an enthusiastic user of NHibernate through a number of large enterprise application builds. Even today I would still use an ORM for most classes of enterprise application. However, there are some scenarios where ORMs are best avoided. Let me introduce my easy-to-use 'when to use an ORM' chart. It's got two axes, 'Model Complexity' and 'Throughput'.

The X-axis, Model Complexity, describes the complexity of your domain model: how many entities you have and how complex their relationships are. ORMs excel at mapping complex models between your domain and your database. If you have this kind of model, using an ORM can significantly speed up and simplify your development, and you'd be a fool not to use one.

The problem with ORMs is that they are a leaky abstraction. You can't really use them without understanding how they communicate with your relational model. The mapping can be complex, and you have to have a good grasp of your relational database, how it responds to SQL requests, and how your ORM comes to generate both the relational schema and the SQL that talks to it. Thinking of ORMs as a way to avoid getting to grips with SQL, tables and indexes will only lead to pain and suffering. Their benefit is that they automate the grunt work and save you the boring task of writing all that tedious CRUD column-to-property mapping code.
The Y-axis in the chart, Throughput, describes the transactional throughput of your system. At very high levels, hundreds of transactions per second, you need hard-core DBA foo to get out of the deadlocked hell where you will inevitably find yourself. When you need this kind of scalability you can't treat your ORM as anything other than a very leaky abstraction: you will have to tweak both the schema and the SQL it generates. At very high levels you'll need Ayende-level NHibernate skills to avoid grinding to a halt.

If you have a simple model but very high throughput, experience tells me that an ORM is more trouble than it's worth. You'll end up spending so much time fine-tuning your relational model and your SQL that it simply acts as an unwanted obfuscation layer. In fact, at the top end of scalability you should question the choice of a relational ACID model entirely, and consider an eventually-consistent, event-based architecture. Similarly, if your model is simple and you don't have high throughput, you might be better off using a simple data mapper like SimpleData.

So, to sum up: ORMs are great, but think twice before using one where you have a simple model and high throughput.
June 25, 2012
· 18,577 Views
EasyNetQ, a simple .NET API for RabbitMQ
After pondering the results of our message queue shootout, we decided to run with RabbitMQ. Rabbit ticks all of the boxes: it's supported (by Spring Source, and ultimately VMware), it scales, and it has the features and performance we need. The RabbitMQ.Client provided by Spring Source is a thin wrapper that quite faithfully exposes the AMQP protocol, so it expects messages as byte arrays. For the shootout tests, spraying byte arrays around was fine, but in the real world we want our messages to be .NET types. I also wanted to provide developers with a very simple API that abstracted away the Exchange/Binding/Queue model of AMQP and instead provided a simple publish/subscribe and request/response model. My inspiration was the excellent work done by Dru Sellers and Chris Patterson with MassTransit (the new V2.0 beta is just out).

The code is on GitHub here: https://github.com/mikehadlow/EasyNetQ

The API centres around an IBus interface that looks like this:

```csharp
/// <summary>
/// Provides a simple Publish/Subscribe and Request/Response API for a message bus.
/// </summary>
public interface IBus : IDisposable
{
    /// <summary>
    /// Publishes a message.
    /// </summary>
    /// <typeparam name="T">The message type</typeparam>
    /// <param name="message">The message to publish</param>
    void Publish<T>(T message);

    /// <summary>
    /// Subscribes to a stream of messages that match a .NET type.
    /// </summary>
    /// <typeparam name="T">The type to subscribe to</typeparam>
    /// <param name="subscriptionId">
    /// A unique identifier for the subscription. Two subscriptions with the same subscriptionId
    /// and type will get messages delivered in turn. This is useful if you want multiple subscribers
    /// to load balance a subscription in a round-robin fashion.
    /// </param>
    /// <param name="onMessage">
    /// The action to run when a message arrives.
    /// </param>
    void Subscribe<T>(string subscriptionId, Action<T> onMessage);

    /// <summary>
    /// Makes an RPC style asynchronous request.
    /// </summary>
    /// <typeparam name="TRequest">The request type.</typeparam>
    /// <typeparam name="TResponse">The response type.</typeparam>
    /// <param name="request">The request message.</param>
    /// <param name="onResponse">The action to run when the response is received.</param>
    void Request<TRequest, TResponse>(TRequest request, Action<TResponse> onResponse);

    /// <summary>
    /// Responds to an RPC request.
    /// </summary>
    /// <typeparam name="TRequest">The request type.</typeparam>
    /// <typeparam name="TResponse">The response type.</typeparam>
    /// <param name="responder">
    /// A function to run when the request is received. It should return the response.
    /// </param>
    void Respond<TRequest, TResponse>(Func<TRequest, TResponse> responder);
}
```

To create a bus, just use a RabbitHutch. Sorry, I couldn't resist it :)

```csharp
var bus = RabbitHutch.CreateRabbitBus("localhost");
```

You can just pass in the name of the server to use the default Rabbit virtual host '/', or you can specify a named virtual host like this:

```csharp
var bus = RabbitHutch.CreateRabbitBus("localhost/myVirtualHost");
```

The first messaging pattern I wanted to support was publish/subscribe. Once you've got a bus instance, you can publish a message like this:

```csharp
var message = new MyMessage { Text = "Hello!" };
bus.Publish(message);
```

This publishes the message to an exchange named by the message type. You subscribe to a message like this:

```csharp
bus.Subscribe<MyMessage>("test", message => Console.WriteLine(message.Text));
```

This creates a queue named 'test_' and binds it to the message type's exchange. When a message is received it is passed to the Action delegate. If there is more than one subscriber to the same message type named 'test', Rabbit will hand out the messages in a round-robin fashion, so you get simple load balancing out of the box. Subscribers to the same message type, but with different names, will each get a copy of the message, as you'd expect.

The second messaging pattern is an asynchronous RPC. You can call a remote service like this:

```csharp
var request = new TestRequestMessage { Text = "Hello from the client! " };

bus.Request<TestRequestMessage, TestResponseMessage>(request,
    response => Console.WriteLine("Got response: '{0}'", response.Text));
```

This first creates a new temporary queue for the TestResponseMessage. It then publishes the TestRequestMessage with a return address to the temporary queue. When the TestResponseMessage is received, it passes it to the Action delegate. RabbitMQ happily creates temporary queues and provides a return address header, so this was very easy to implement.

To write an RPC server, simply use the Respond method like this:

```csharp
bus.Respond<TestRequestMessage, TestResponseMessage>(request =>
    new TestResponseMessage { Text = request.Text + " all done!" });
```

This creates a subscription for the TestRequestMessage. When a message is received, the Func delegate is passed the request and returns the response. The response message is then published to the temporary client queue. Once again, scaling RPC servers is simply a question of running up new instances; Rabbit will automatically distribute messages to them.

The features of AMQP (and Rabbit) make creating this kind of API a breeze. Check it out and let me know what you think.
May 13, 2012
· 10,618 Views
TFS Build: _PublishedWebsites for exe and dll Projects
We’re using TFS on my current project. Yes, yes, I know. It’s generally good practice to collect all the code under your team’s control in a single uber-solution, as described in this Patterns and Practices PDF, Team Development with TFS Guide. If you then configure the TFS build server to build this solution, its default behaviour is to place the build output into a single folder, ‘Release’. Any web application projects in your solution will also be output to a folder called _PublishedWebsites\<project name>. This is very nice, because it means that you can simply robocopy the web application to deploy it.

Unfortunately there’s no similar default behaviour for other project types such as WinForms, console or library projects. It would be very nice if we could have a _PublishedApplications\ sub-folder with the output of any selected project(s). Fortunately it’s not that hard to do.

The way _PublishedWebsites works is pretty simple. If you look at the project file of your web application, you’ll notice an import near the bottom that pulls in Microsoft.WebApplication.targets. On my machine the MSBuildExtensionsPath property evaluates to C:\Program Files\MSBuild. If we open the Microsoft.WebApplication.targets file, we can see that it’s a pretty simple MSBuild file that recognises when the build is not a desktop build (i.e. it’s a TFS build) and copies the output to:

    $(OutDir)_PublishedWebsites\$(MSBuildProjectName)

I simply copied the Microsoft.WebApplication.targets file, put it under source control with a relative path from my project files, changed _PublishedWebsites to _PublishedApplications, and renamed the file CI.exe.targets. For each project that I want to output to _PublishedApplications, I simply added an import of CI.exe.targets at the bottom of the project file. You can edit CI.exe.targets (or whatever you want to call it) to do your bidding.
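The two Import elements the article refers to would look roughly like the following; the v9.0 path and the relative CI.exe.targets path are my assumptions, since the post doesn’t show the actual XML:

```xml
<!-- Sketch only; exact paths are assumptions. -->
<!-- The stock import near the bottom of a web application project file: -->
<Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets" />

<!-- The equivalent import added to each project to be published, pointing
     at the source-controlled copy of the targets file via a relative path: -->
<Import Project="..\..\build\CI.exe.targets" />
```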
In my case, the only change so far has been to add a couple of lines to copy the App.config file. There’s a lot of stuff in Microsoft.WebApplication.targets that’s only relevant to web applications and can be stripped out for other project types, but I’ll leave that as an exercise for the reader. There was also a discussion on StackOverflow, with some nice alternative suggestions for how you might want to do this. It’s worth checking out.
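The couple of lines that copy App.config aren’t shown in the text; inside CI.exe.targets they might look something like this (the target name, condition, and destination naming are my guesses, not the author’s actual code):

```xml
<!-- Hypothetical sketch: copy App.config next to the published exe,
     renamed to the conventional <assembly name>.config. -->
<Target Name="_CopyAppConfig" Condition="'$(IsDesktopBuild)' != 'true'">
  <Copy SourceFiles="$(MSBuildProjectDirectory)\App.config"
        DestinationFiles="$(OutDir)_PublishedApplications\$(MSBuildProjectName)\$(TargetFileName).config"
        Condition="Exists('$(MSBuildProjectDirectory)\App.config')" />
</Target>
```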
June 12, 2009
· 10,590 Views
