· 3 min read

Here at Rabbit HQ we've been enjoying "RabbitMQ in Action", the introduction to RabbitMQ and messaging. Part of Manning's "In Action" series, the book is written by Jason Williams and Alvaro Videla, both well known for their many contributions to the Rabbit community.

Today we'd like to say thank-you to Jason and Alvaro. Thank you, Jason and Alvaro! You did an amazing job and infinite beers are on us.

But there's more...  Manning have kindly offered a promotional discount of 37% to readers of this blog.  All is revealed below, in a guest post by Jason Williams himself...

· 4 min read

For quite a while here at RabbitMQ headquarters, we have been struggling to find a good way to expose messaging in a web browser. In the past we tried many things, ranging from the old-and-famous JsonRPC plugin (which basically exposes AMQP via AJAX), to Rabbit-Socks (an attempt to create a generic protocol hub), to the management plugin (which can be used for basic things like sending and receiving messages from the browser).

Over time we've learned that messaging on the web is very different to what we're used to. None of our attempts really addressed that, and it is likely that messaging on the web will not be a fully solved problem for some time yet.

That said, there is a simple thing RabbitMQ users keep on asking about, and although not perfect, it's far from the worst way to do messaging in the browser: exposing STOMP through WebSockets.
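
To give a concrete feel for what that looks like on the wire, here is a minimal TypeScript sketch that builds STOMP frames by hand and sends them over a raw browser WebSocket. The endpoint URL, credentials and destination are assumptions for illustration only, and in practice you would use a STOMP client library rather than hand-rolled frames.

```typescript
// Sketch: speaking STOMP over a plain browser WebSocket.
// The endpoint URL, credentials, and queue name are assumptions; adjust
// them to match how the Web-STOMP plugin is configured on your broker.

const NUL = "\u0000"; // every STOMP frame is terminated by a NUL byte

// Build a STOMP frame: command line, header lines, blank line, body, NUL.
function frame(command: string, headers: Record<string, string>, body = ""): string {
  const headerLines = Object.entries(headers)
    .map(([key, value]) => `${key}:${value}`)
    .join("\n");
  return `${command}\n${headerLines}\n\n${body}${NUL}`;
}

const ws = new WebSocket("ws://localhost:15674/ws"); // assumed endpoint

ws.onopen = () => {
  // Open a STOMP session once the WebSocket itself is up.
  ws.send(frame("CONNECT", {
    "accept-version": "1.2",
    host: "/",
    login: "guest",
    passcode: "guest",
  }));
};

ws.onmessage = (event) => {
  const data = String(event.data);
  if (data.startsWith("CONNECTED")) {
    // Session established: subscribe to a queue and publish a test message.
    ws.send(frame("SUBSCRIBE", { id: "sub-0", destination: "/queue/test" }));
    ws.send(frame("SEND",
      { destination: "/queue/test", "content-type": "text/plain" },
      "hello from the browser"));
  } else if (data.startsWith("MESSAGE")) {
    console.log("received:", data);
  }
};
```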

· 12 min read

You have a queue in Rabbit. You have some clients consuming from that queue. If you don't set a QoS setting at all (basic.qos), then Rabbit will push all the queue's messages to the clients as fast as the network and the clients will allow. The consumers will balloon in memory as they buffer all the messages in their own RAM. The queue may appear empty if you ask Rabbit, but there may be millions of messages unacknowledged as they sit in the clients, ready for processing by the client application. If you add a new consumer, there are no messages left in the queue to send to it. Messages are just being buffered in the existing clients, and may be there for a long time, even if there are other consumers that become available to process such messages sooner. This is rather suboptimal.

So, the default QoS prefetch setting gives clients an unlimited buffer, and that can result in poor behaviour and performance. But what should you set the QoS prefetch buffer size to? The goal is to keep the consumers saturated with work, but to minimise the client's buffer size so that more messages stay in Rabbit's queue and are thus available for new consumers or to just be sent out to consumers as they become free.
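
As a concrete illustration, here is a hedged sketch of what setting that prefetch limit looks like from a consumer, using the Node.js amqplib client (not necessarily the client you are using). The queue name and the value of 20 are placeholders, not recommendations.

```typescript
// Sketch: capping the unacknowledged-message buffer with basic.qos,
// using the Node.js amqplib client. The queue name and the prefetch
// value of 20 are placeholders; the right number depends on your
// processing time and network round-trip.
import * as amqp from "amqplib";

async function consume(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("work", { durable: true });

  // Without this call the broker pushes messages as fast as it can and they
  // pile up in the client. With it, at most 20 messages can be outstanding
  // (delivered but not yet acknowledged) on this channel at any time.
  await ch.prefetch(20);

  await ch.consume("work", (msg) => {
    if (msg === null) return; // consumer was cancelled by the broker
    // ... process msg.content here ...
    ch.ack(msg); // acking frees a slot, so the broker sends the next message
  }, { noAck: false });
}

consume().catch(console.error);
```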

· 7 min read

Welcome back! Last time we talked about flow control and latency; today let's talk about how different features affect the performance we see. Here are some simple scenarios. As before, they're all variations on the theme of one publisher and one consumer publishing as fast as they can.

· 2 min read

Over the weekend, RabbitMQ co-sponsored London Realtime, two nights and two days of unadulterated hackery. It was all put on by the apparently indefatigable crew at GoSquared, and it was a very impressive debut effort.

As a co-sponsor we had one of the iPad prizes to award. We decided to allow hacks that used one or more of RabbitMQ, SockJS, or Cloud Foundry. This meant that about half of the twenty-seven hacks were eligible when it came to judging, making the choice rather difficult.

· 6 min read

So today I would like to talk about some aspects of RabbitMQ's performance. There are a huge number of variables that feed into the overall level of performance you can get from a RabbitMQ server, and today we're going to try tweaking some of them and seeing what we can see.

· 4 min read

Or: How to properly do multiplexing on WebSockets or on SockJS

As you may know, WebSockets are a cool new HTML5 technology that allows you to send and receive messages asynchronously. Our compatibility layer - SockJS - emulates them and will work even on old browsers or behind proxies. Conceptually, WebSockets are very simple: the API is basically connect, send and receive. But what if your web app has many modules, and each one wants to be able to send and receive data?
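
One simple way to approach this is to tag every message with a logical channel name and dispatch on it client-side. The TypeScript sketch below illustrates the idea; the JSON wire format and all names are illustrative assumptions, not the actual solution described in the post.

```typescript
// Sketch: multiplexing many logical channels over one WebSocket by tagging
// each message with a channel name. The JSON wire format and every name
// below are illustrative assumptions, not an existing protocol or library.

type Handler = (payload: unknown) => void;

class Multiplexer {
  private handlers = new Map<string, Set<Handler>>();

  constructor(private ws: WebSocket) {
    ws.onmessage = (event) => {
      // Route each incoming message to the handlers of its logical channel.
      const { channel, payload } = JSON.parse(String(event.data));
      for (const handler of this.handlers.get(channel) ?? []) {
        handler(payload);
      }
    };
  }

  // Each module gets a lightweight channel object instead of the raw socket.
  channel(name: string) {
    return {
      send: (payload: unknown) =>
        this.ws.send(JSON.stringify({ channel: name, payload })),
      onMessage: (handler: Handler) => {
        if (!this.handlers.has(name)) this.handlers.set(name, new Set());
        this.handlers.get(name)!.add(handler);
      },
    };
  }
}

// Usage: two independent modules share one underlying connection.
const ws = new WebSocket("ws://localhost:9999/mux"); // assumed endpoint
const mux = new Multiplexer(ws);
const chat = mux.channel("chat");
const stats = mux.channel("stats");
chat.onMessage((m) => console.log("chat:", m));
stats.onMessage((m) => console.log("stats:", m));
ws.onopen = () => chat.send({ text: "hello" });
```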

· 3 min read

AtomizeJS is a JavaScript library for writing distributed programs that run in the browser, without having to write any application-specific logic on the server.

Here at RabbitMQ HQ we spend quite a lot of time arguing. Occasionally, it's about important things, like what messaging really means and the range of different APIs that can be used to achieve messaging. RabbitMQ and AMQP present a very explicit interface to messaging: you very much have the verbs send and receive, and you need to think about what your messaging patterns are. There's a lot of (often quite clever) stuff going on under the bonnet, but nevertheless the interface is quite low-level and explicit, which gives a good degree of flexibility. Sometimes though, that style of API is not the most natural fit for the problem you're trying to solve - do you really reach an impasse and think "What I need here is an AMQP message broker", or do you, from pre-existing knowledge, realise that you could choose to use an AMQP message broker to solve your current problem?

· 9 min read

The previous release of RabbitMQ (2.7.0) brought with it a better way of managing plugins, one-stop URI connecting by clients, thread-safe consumers in the Java client, and a number of performance improvements and bug fixes. The latest release (2.7.1) is essentially a bug-fix release, though it also makes RabbitMQ compatible with Erlang R15B and enhances parts of the management interface. The previous release didn't get a blog post, so I've combined both releases in this one. (These are my own personal remarks and are NOT binding; errors of commission or omission are entirely my own -- Steve Powell.)