Published April 12, 2021
Elixir 1.10
Libcluster 3.2
Here I have an Elixir application that uses Phoenix. As you can see from the URL, it's currently deployed to Gigalixir. I deployed it according to episode 112, which uses Distillery. If you're unfamiliar with how to deploy your app to Gigalixir, check out that episode.
Now that our application is deployed and traffic is increasing, let's update it to use a clustered deployment. This will provide redundancy and give our application higher availability. And it's incredibly easy to cluster your Elixir application on Gigalixir with the libcluster package. libcluster provides a mechanism for automatically forming clusters of Erlang nodes, with support for automatic cluster formation and healing.
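As a quick aside, libcluster supports several discovery strategies. For clustering nodes on a local network during development, for example, it ships a Gossip strategy that discovers peers over UDP multicast. A minimal sketch (the topology name `gossip_example` is arbitrary):

```elixir
# config/dev.exs (sketch) - cluster local nodes via UDP multicast.
# The Gossip strategy needs no required options; sensible defaults apply.
config :libcluster,
  topologies: [
    gossip_example: [
      strategy: Cluster.Strategy.Gossip
    ]
  ]
```

On Gigalixir, though, we'll use the Kubernetes strategy, as we'll see below.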
Let's get started. We'll go to hex.pm and copy the libcluster config. Then we'll open our application's Mixfile and add libcluster to our list of dependencies.
mix.exs
...
defp deps do
...
{:libcluster, "~> 3.2"},
...
end
...
With it added, we can go to the command line and run mix deps.get.
$ mix deps.get
...
Once it's installed, let's open our config/prod.exs, and here we'll add our libcluster config. I'll paste in the structure of our configuration. Now let's fill it in. We'll start by specifying our strategy: we'll use the Cluster.Strategy.Kubernetes strategy, which is supported by Gigalixir. Then we'll need to set the kubernetes_selector, which Gigalixir sets for us as LIBCLUSTER_KUBERNETES_SELECTOR, and the kubernetes_node_basename, which Gigalixir sets as LIBCLUSTER_KUBERNETES_NODE_BASENAME.
One note: I'm using Distillery environment variable syntax here because this app is deployed with Distillery. If you're using Elixir releases, you'll want to use System.get_env instead.
config/prod.exs
...
config :libcluster,
topologies: [
k8s_example: [
strategy: Cluster.Strategy.Kubernetes,
config: [
kubernetes_selector: "${LIBCLUSTER_KUBERNETES_SELECTOR}",
kubernetes_node_basename: "${LIBCLUSTER_KUBERNETES_NODE_BASENAME}"
]
]
]
...
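For reference, if you're on Elixir releases instead of Distillery, the equivalent runtime configuration might look like the sketch below, reading the same Gigalixir-provided variables with System.get_env/1 at boot (in Elixir 1.10, runtime release config lives in config/releases.exs):

```elixir
# config/releases.exs (sketch) - runtime config for Elixir releases.
# Reads the variables Gigalixir sets when the release boots.
import Config

config :libcluster,
  topologies: [
    k8s_example: [
      strategy: Cluster.Strategy.Kubernetes,
      config: [
        kubernetes_selector: System.get_env("LIBCLUSTER_KUBERNETES_SELECTOR"),
        kubernetes_node_basename: System.get_env("LIBCLUSTER_KUBERNETES_NODE_BASENAME")
      ]
    ]
  ]
```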
Then let's go to our application.ex module. In our start callback we'll get our topologies from the libcluster config we just set.
Then in our list of children we'll include the Cluster.Supervisor module with our topologies, and we'll give it the name Teacher.ClusterSupervisor. I'm using Teacher since that's the name of my application, but you'll want to update it to your application's name.
lib/teacher/application.ex
...
def start(_type, _args) do
topologies = Application.get_env(:libcluster, :topologies) || []
children = [
{Cluster.Supervisor, [topologies, [name: Teacher.ClusterSupervisor]]},
...
]
...
end
...
end
...
And that’s all we need to do to set up clustering for our app. We should now be ready to deploy our changes to Gigalixir.
Let’s go to the command line and see what files have changed. It’s just the changes we made for clustering. So let’s go ahead and add all our files. Then we can commit them and deploy our changes to Gigalixir.
$ git add .
$ git commit -m "Sets up clustering"
$ git push gigalixir master
Once our code is deployed we can go back to the browser, and everything looks like it's working. But how do we know how many nodes our application has?
Let's go back to the command line and start a remote console for our deployed application by running gigalixir ps:remote_console. Then let's call Node.self(), which returns the current node, and we see our node is returned. Now let's call Node.list(), which returns a list of all visible nodes in the system, excluding the local node. And when we do, we don't get anything back. Now that we've set up our app for clustering, we need to scale it up and add nodes to our cluster.
$ gigalixir ps:remote_console
iex> Node.self()
iex> Node.list()
[]
Let's exit out of the console and clear the screen. Then let's run gigalixir apps, and we see the number of replicas displayed: 1. We can use the Gigalixir CLI to scale up our app. Let's run gigalixir ps:scale --replicas=2 --size=0.6.
$ gigalixir ps:scale --replicas=2 --size=0.6
{
"replicas": 2,
"size": 0.6
}
Running gigalixir apps again, we can see our app has been updated. Now let's start a remote console again and call Node.self(), and we see our node returned. But now if we call Node.list() - perfect, we see another node returned.
$ gigalixir ps:remote_console
iex> Node.self()
:"cooked-infinite-anaconda@10.56.14.147"
iex> Node.list()
[:"cooked-infinite-anaconda@10.56.21.49"]
Our app is now set up - we can add and remove nodes as needed. Let's go back to our application in the browser. If we go to an album's show page, there's a "Like count", and if we click the "like" link the like count is incremented. Let's take a quick look at how this works.
We'll go back to our code and open the AlbumLive.Show LiveView module. When we clicked that "like" link, it triggered this handle_event callback, which grabs the album from the socket and increments its "like_count". It then broadcasts the updated album to all clients subscribed to this topic, which is handled by the handle_info callback below.
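The pattern described above might be sketched like this. Note the module, topic, event, and field names here are assumptions for illustration, not the app's actual code, and persistence is elided:

```elixir
# Sketch of a LiveView that increments a like count and broadcasts
# the update over Phoenix.PubSub so every subscribed client - on any
# node in the cluster - receives it.
defmodule TeacherWeb.AlbumLive.Show do
  use TeacherWeb, :live_view

  @topic "albums"

  # Triggered by the "like" link in the template
  def handle_event("like", _params, socket) do
    album = socket.assigns.album
    updated = %{album | like_count: album.like_count + 1}

    # Broadcast reaches subscribers on all clustered nodes
    Phoenix.PubSub.broadcast(Teacher.PubSub, @topic, {:album_updated, updated})

    {:noreply, assign(socket, :album, updated)}
  end

  # Every subscribed LiveView process re-renders with the new count
  def handle_info({:album_updated, album}, socket) do
    {:noreply, assign(socket, :album, album)}
  end
end
```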
Let's test something out. We'll open the corresponding show.html.leex template and update it to display the current node on the page.
Template path: lib/teacher_web/live/album_live/show.html.leex
...
<p><b>Node:</b> <%= Node.self() %></p>
...
Then let's go to the command line, commit our changes, and deploy them to Gigalixir.
$ git add .
$ git commit -m "Displays current node"
$ git push gigalixir master
With it deployed, I'll open our album page in two different windows, and we can see each page is displaying a different node.
Now if we "like" an album from one page - great, the event is broadcast and each page is updated successfully. With Gigalixir, distributed Phoenix channels just work out of the box. There's no need for extra configuration.