This CloudHub platform service arrived in the CloudHub R20 release, harnessing Mule’s Object Store capabilities. Each CloudHub integration application is given its own storage, with zero configuration required. This makes it extremely easy to implement two very important integration scenarios...
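As a rough sketch of how this looks in an application, assuming the Mule 3 ObjectStore connector element names (the partition name, keys, and MEL expressions below are hypothetical):

```xml
<!-- Hypothetical sketch: store and retrieve a value in the application's object store -->
<objectstore:config name="cloudHubStore" partition="customers" doc:name="ObjectStore"/>

<flow name="storeCustomerFlow">
    <!-- Persist the current payload under a key derived from the message -->
    <objectstore:store config-ref="cloudHubStore"
                       key="#[payload.id]"
                       value-ref="#[payload]"/>
</flow>

<flow name="retrieveCustomerFlow">
    <!-- Later, fetch the value back by the same key -->
    <objectstore:retrieve config-ref="cloudHubStore" key="#[flowVars.customerId]"/>
</flow>
```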
Celery has good support for a variety of different message brokers – RabbitMQ, Redis, SQS, etc. – but support for result storage is somewhat more limited. Celery-S3 lets you store Celery results in an S3 bucket, which means you can run a fully-functioning Celery installation on AWS with nothing but a Python install.
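A minimal configuration sketch for this setup; the backend path and the settings-dict keys follow my recollection of the celery-s3 README and should be checked against the project docs, and the credential values are placeholders:

```python
# Celery settings module sketch: route task results to an S3 bucket via celery-s3.
# Backend path and CELERY_S3_BACKEND_SETTINGS keys are assumptions from the
# celery-s3 README; credentials and bucket name are placeholders.
CELERY_RESULT_BACKEND = 'celery_s3.backends.S3Backend'

CELERY_S3_BACKEND_SETTINGS = {
    'aws_access_key_id': 'YOUR_ACCESS_KEY',
    'aws_secret_access_key': 'YOUR_SECRET_KEY',
    'bucket': 'your-results-bucket',
}
```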
I really like seeing OAuth in action. It makes me really evaluate how my network of cloud services can work together. OAuth and reciprocity services like Zapier help me integrate, automate and make my world go around in a secure way I can depend on.
Recently, I was asked what the differences between software architecture and software design are. At a superficial level, both architecture and design seem to mean relatively the same thing. However, if we examine both of these terms further, we will find that they are in fact very different in the level of detail they encompass.
A common perception of APIs is that the majority of APIs are run by web technology startups. However, in the sample, only 17% of the companies providing APIs were web technology startups by our definition (less than 3 years old with a primarily digital/web-based product).
I have come across several issues where people were having trouble configuring the Sift file appender in ServiceMix to enable per-bundle logging. Specifically, issues arose when trying to configure a rolling log file appender for Sift.
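For context, the stock Sift appender in ServiceMix's etc/org.ops4j.pax.logging.cfg uses a plain FileAppender; a rolling variant looks roughly like this sketch, based on the default Karaf/pax-logging configuration, with the size limits as example values:

```
# Sift appender: one log file per bundle, keyed on the bundle.name MDC entry
log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.sift.key=bundle.name
log4j.appender.sift.default=servicemix
# Nested appender: RollingFileAppender instead of the default FileAppender
log4j.appender.sift.appender=org.apache.log4j.RollingFileAppender
log4j.appender.sift.appender.MaxFileSize=10MB
log4j.appender.sift.appender.MaxBackupIndex=5
log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %m%n
log4j.appender.sift.appender.file=${karaf.data}/log/$\\{bundle.name\\}.log
log4j.appender.sift.appender.append=true
```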
I recently thought about creating a process with which I could keep two instances of WordPress in sync. I wanted to see if I could do this with Mule. I came up with the following solution:
While writing some tests for an Apache Camel project, I just spent rather longer than I'd have liked trying to work out how to configure a connection bean for Mongo without using Spring. Since I hide my embarrassments in public I thought I'd best share with anyone else with brain freeze.
See how Nagios and Gearman, an (a)synchronous job queue that can help your website scale, work together in simple and large scale architectures.
You can route traffic to a JSP using axis2.xml in WSO2 ESB. Here's the configuration in the main sequence.
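A sketch of what such a main-sequence configuration can look like in Synapse XML; the host, port, and JSP path here are hypothetical:

```xml
<sequence xmlns="http://ws.apache.org/ns/synapse" name="main">
    <in>
        <!-- Forward the incoming request to a JSP hosted on another server -->
        <send>
            <endpoint>
                <address uri="http://localhost:8080/myapp/status.jsp"/>
            </endpoint>
        </send>
    </in>
    <out>
        <!-- Relay the JSP's response back to the caller -->
        <send/>
    </out>
</sequence>
```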
I've shown how to customize the URL of a proxy service; now let's try to access the WSDL.
Here I am using WSO2 IS 4.0.0 with Apache Directory Studio to browse it. Check out this easy, six-step process with screenshots to guide you.
I upgraded PHP and related pecl modules on my development machine today, and ran into a problem with Gearman. Actually, I ran into more than one! First came the challenge of getting the newest pecl module working with my installed Gearman version.
We currently use a stored procedure API divided into an upper and a lower half. The upper half generates an API call for the lower half, which can be called independently when finer control is needed.
We all love wizards... (software wizards, I mean). We are always happy to jump on those "Next" buttons like we were dancing the funky chicken on our… well, you get the point.
Mylaensys have just publicised on their blog the fact that they are working on an integration between DHTMLX and Apache Isis.
In order to receive JMS messages, Spring provides the concept of message listener containers. These are beans that can be tied to receive messages that arrive at certain destinations. This post will examine the different ways in which containers can be configured.
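As one example of the configurations the post covers, here is a DefaultMessageListenerContainer wired up in Spring XML; the broker URL, queue name, and listener class are illustrative:

```xml
<!-- Illustrative sketch: ActiveMQ connection factory plus a JMS listener container -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616"/>
</bean>

<!-- A POJO implementing javax.jms.MessageListener (hypothetical class) -->
<bean id="myListener" class="com.example.MyMessageListener"/>

<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="orders.queue"/>
    <property name="messageListener" ref="myListener"/>
    <!-- Scale up concurrent consumption of the queue -->
    <property name="concurrentConsumers" value="3"/>
</bean>
```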
You may be familiar with the free Mule ESB community edition, but I’d like to take a minute to introduce you to MuleSoft and our Enterprise version of Mule. I think it’s important to understand everything that Mule can offer developers. Why do companies rely so heavily on Mule?
APIs came more and more to the forefront of tech-thinking in 2012 and there was some great thinking about API trends and the Web more generally.
A comment on my previous blog post said that they refer to 'legacy' systems as 'mature' as this has more positive connotations. I think it also accurately reflects legacy as being a stage of a lifecycle rather than any particular indication of quality.
Writing and contributing to a specification is one thing. Working with it and looking into real examples is a pre-condition if you want to give valuable feedback. I thought it might be a good time to give the Java API for Processing JSON (JSON-P) a test drive.
It turns out that over the last couple of years, the growing prominence of remote APIs, especially REST-based ones, has increased the need for protected methods of accessing your API.
Most systems don't have a permanent programmer presence. A brilliant system from 15 years ago may now be called "legacy". But does "legacy" necessarily equal "bad"?
Because quality is defined by project requirements, the meaning of quality constantly changes based on the project.
We just had two amazing days at API Strategy and Practice in New York with great conversations! A full review coming up – but on the way home we just wanted to share a few technical gems which stood out from the sessions...