In my last two blog posts I wrote about a library called Testcontainers. It’s a Java wrapper around Docker containers that you can use to run software your application depends on in a test context. My last post presented a solution that makes it easier to start up an Infinispan caching server, focusing on a standalone server. While this is probably good enough most of the time, sometimes you might need a server running in clustered mode. This post shows one solution for that.
Hotrod Topology State Transfer
I’ve been using the Hotrod protocol to connect clients to the server. If you’re running Infinispan in a cluster, clients are automatically updated with new server addresses each time a node joins the cluster. This is called the topology state transfer, and it also happens once initially. When you start a clustered Infinispan instance using Testcontainers (clustered mode is the default of the Docker Hub Infinispan image), the Infinispan node communicates its IP address from inside the Docker container to the client outside the container, overwriting the previously configured connection information. I covered this problem briefly in my last post. During the last week I’ve tried to find a way around it.
Configuring the Hotrod connector on the server
If you look at the default clustered configuration provided by Infinispan, you’ll find a section that configures the Hotrod connector.
It turns out you can adjust this configuration using two attributes: external-host and external-port. Hotrod will use the values of these attributes when communicating its address during the topology state transfer. While this sounds promising at first, there’s a problem: we can use localhost as a value for external-host, but we don’t know which port Docker will expose externally, because Testcontainers maps container ports to random free host ports.
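To make this concrete, the relevant part of the server configuration could look roughly like the fragment below. This is an illustrative sketch, not a copy of the real clustered.xml: the element and attribute names follow the Infinispan 9 server schema as I understand it, and all values are examples only.

```xml
<!-- Illustrative fragment of a clustered server configuration.
     external-host/external-port control the address the server
     announces to clients during the topology state transfer. -->
<hotrod-connector socket-binding="hotrod" cache-container="clustered">
    <topology-state-transfer external-host="localhost"
                             external-port="11222"/>
</hotrod-connector>
```

The catch described above is visible here: external-port must be a fixed value in the configuration, but the externally mapped port is only known after Docker has started the container.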
Ignoring the server updates on the client
If we can’t make the server behave differently, we can try to change the client so that it ignores the update. Let’s look at the error message:
Jan 21, 2018 6:18:58 PM org.infinispan.client.hotrod.impl.protocol.Codec20 readNewTopologyAndHash
INFO: ISPN004006: localhost:33208 sent new topology view (id=1, age=0) containing 1 addresses: [172.17.0.2:11222]
Jan 21, 2018 6:18:58 PM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo
INFO: ISPN004014: New server added(172.17.0.2:11222), adding to the pool.
Jan 21, 2018 6:19:02 PM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo
WARN: ISPN004015: Failed adding new server 172.17.0.2:11222
org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: 172.17.0.2:11222
We can see that the error message originates from a method called updateTopologyInfo in the TcpTransportFactory. Looking closer, we see that it calls a method updateServers(). Maybe we can create our own TcpTransportFactory that overrides this method and ignores the updated servers. I’ve updated the existing InfinispanContainer and created two subclasses, so that we now have a StandaloneInfinispanContainer and a ClusteredInfinispanContainer, both extending the InfinispanContainer base class. Here’s the custom TcpTransportFactory for the clustered container:
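Since the original snippet isn’t reproduced here, the following is a minimal self-contained sketch of the idea only. The class and method names are simplified stand-ins, not Infinispan’s real API (the real class is org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory, and its actual signatures may differ):

```java
import java.util.Collection;
import java.util.concurrent.CopyOnWriteArrayList;

// Stand-in for the Hotrod client's transport factory: it keeps a list of
// known server addresses and replaces it when the server pushes a new
// topology view.
class TransportFactory {
    protected final Collection<String> servers = new CopyOnWriteArrayList<>();

    TransportFactory(Collection<String> initialServers) {
        servers.addAll(initialServers);
    }

    // Called when the server announces a new topology view.
    protected void updateServers(Collection<String> newServers) {
        servers.clear();
        servers.addAll(newServers);
    }

    Collection<String> currentServers() {
        return servers;
    }
}

// The clustered container installs a factory that simply drops the update,
// so the client keeps talking to the originally configured address
// (localhost plus the port Testcontainers mapped for us).
class IgnoreTopologyTransportFactory extends TransportFactory {
    IgnoreTopologyTransportFactory(Collection<String> initialServers) {
        super(initialServers);
    }

    @Override
    protected void updateServers(Collection<String> newServers) {
        // Intentionally ignore the topology state transfer.
    }
}
```

With a factory like this in place, the internal container address (e.g. 172.17.0.2:11222 from the log above) never replaces the configured localhost address.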
Our base container class configures the cache manager and uses the TcpTransportFactory provided by the subclass. If there is none, the default behaviour is used, as before.
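The wiring in the base container can be sketched as follows. Again this is a hedged, self-contained model rather than the real code: the class names ending in "Sketch" and the com.example factory class are hypothetical, and I’m assuming the Hotrod client of that era allowed selecting a custom transport factory class via its client configuration.

```java
import java.util.Optional;

// Stand-in for the InfinispanContainer base class: subclasses may supply
// a custom transport factory class name; by default none is set.
class InfinispanContainerSketch {
    protected Optional<String> customTransportFactoryClass() {
        return Optional.empty(); // default: use the Hotrod client's built-in factory
    }

    // Resolves the factory class the cache manager configuration should use.
    String transportFactoryClassName() {
        return customTransportFactoryClass()
                .orElse("org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory");
    }
}

// Stand-in for ClusteredInfinispanContainer: plugs in the factory that
// ignores topology updates (class name is hypothetical).
class ClusteredInfinispanContainerSketch extends InfinispanContainerSketch {
    @Override
    protected Optional<String> customTransportFactoryClass() {
        return Optional.of("com.example.IgnoreTopologyTcpTransportFactory");
    }
}
```

The design point is simply that the base class owns the cache manager setup while each subclass decides whether the default transport behaviour is acceptable.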
With this change in place, the server can still send updated topology information; the client simply ignores it. For the complete picture, have a look at the git repository containing the full code. Please tell me if you have comments or suggestions on how to improve my approach!