Making socket.io Work With Multiple Nodes

Socket.io is the most popular websocket library available to the application developer. However, for a long time it was full of bugs, had a lot of architectural issues and was barely maintained. Socket.io version 1.0.0 came out to change all that. At the time of writing the latest stable version is 1.1.0, and it is much improved.

If you are excited about websocket technology and socket.io helped you explore it, you'd be delighted to hear about this rebirth. But trying to migrate from the old version to the new one is where you'd lose most of that delight, especially if you have multiple nodes running on your server for load balancing.

When multiple nodes are running on the server side, they are collectively responsible for handling the socket.io clients. A client has no idea which server node it is dealing with, so the nodes need some common ground on which to share information about the clients, allowing any one of them to handle any client. In socket.io 0.9.* this common ground is called a store. A store can be implemented on top of any storage technology by following the store interface, and the redis store was the most widely used.

There are some fundamental problems with this architecture. One of the main ones is that the store holds every single detail about every single client that connects, which drastically limits horizontal scaling. It works fine for a few nodes with a limited number of subscribed clients, but once the number of clients reaches millions it causes a lot of problems. Another is that it is not possible to add new nodes to the cluster without taking the whole cluster down: a new node does not receive the data already held by the running nodes, so it is unable to handle requests from the existing clients.

So they have removed 'stores' from the new socket.io version, and rightly so.

The successor of the redis store is the redis adapter. Here is what my diff looked like after substituting the redis adapter for the redis store.

     var sio = require('socket.io');
     io = sio.listen(server);
 
-    var subscriber = redis.createClient(
-                         config.redisPort, config.redisHost, config.redisOptions);
-    var publisher = redis.createClient(
-                         config.redisPort, config.redisHost, config.redisOptions);
 
-    var RedisStore = require("socket.io/lib/stores/redis");
 
-    io.set('store', new RedisStore(
-             {redisPub: publisher, redisSub: subscriber}));


+    var redisadapter = require('socket.io-redis');
+    io.adapter(redisadapter({ host: config.redisHost, port: config.redisPort }));
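
With the adapter in place, anything one node broadcasts is relayed to the other nodes over Redis pub/sub, so an emit reaches clients no matter which node they are connected to. A minimal sketch of that, with a hypothetical 'news' room and event names used purely for illustration:

    io.on('connection', function (socket) {
      socket.join('news');   // room membership is recorded on whichever node accepted the socket
    });

    // later, from any node in the cluster:
    io.to('news').emit('update', { headline: 'adapters at work' });   // reaches 'news' members on every node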

But the migration does not end here. The new socket.io requires sticky sessions between clients and nodes in order to operate: the polling transport makes several HTTP requests for a single session, and they all need to reach the node that knows about that session.

Sticky sessions ensure that subsequent requests from a client are forwarded to the same node that handled that client's previous requests. IP address based sticky sessions, for example, make sure that all requests from a particular IP address are sent to the same node.

How you should implement sticky sessions depends on the technology you use for the load balancer. If you are using Nginx, it can be configured in the setup with the ip_hash directive. Or, if you are using pm2, you are not that lucky (yet).
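
For example, ip_hash in the upstream block gives you IP based sticky sessions in Nginx. A sketch of such a setup, assuming three node processes listening on ports 3000-3002:

    upstream io_nodes {
        ip_hash;                    # the same client IP always reaches the same node
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
    }

    server {
        listen 80;
        location / {
            proxy_http_version 1.1;                     # needed for the websocket upgrade
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_pass http://io_nodes;
        }
    }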

Or perhaps you use the node cluster module for load balancing. In that case the 'sticky-session' node module should give you a hand. It is still not very mature and could use many more features, but it works.

Wrapping the server instance in the sticky function should do it.

+    var sticky = require('sticky-session');

-    var server = http.createServer(handler);
+    var server = sticky(http.createServer(handler));
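
Putting it all together, here is a minimal sketch of one clustered socket.io 1.1.0 server, consistent with the sticky-session API used above. The config values, the handler and the 'ping'/'pong' events are placeholders; adjust them for your own setup:

    var http = require('http');
    var sticky = require('sticky-session');
    var sio = require('socket.io');
    var redisadapter = require('socket.io-redis');

    var config = { port: 3000, redisHost: '127.0.0.1', redisPort: 6379 };

    function handler(req, res) {
      res.end('socket.io cluster node');
    }

    // sticky forks one worker per CPU and routes each client IP to the same worker
    var server = sticky(http.createServer(handler));

    var io = sio.listen(server);
    io.adapter(redisadapter({ host: config.redisHost, port: config.redisPort }));

    io.on('connection', function (socket) {
      socket.on('ping', function () {
        socket.emit('pong');   // answered by whichever worker owns this client
      });
    });

    server.listen(config.port);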

And now socket.io 1.1.0 starts working! It's really not that difficult, but there is not much help around the internet for the migrator. Once the many Stack Overflow questions around it are answered and new tutorials are put up, socket.io will be great to work with.