Multi-Node Deployment
This documentation describes how to run zeknd in a multi-node setup.
# Installation
The following steps need to be executed on each node:
Choose a working directory. In this example, we use `/home/ubuntu`.
Download the binaries:
Execute `./zeknd init` in the working directory to initialize the config files. Then add `zeknd.yml` to the working directory:
# Configuration
There are two genesis.json files that need to be combined.
## genesis.json #1 - in the working directory
The generated `genesis.json` should look something like the example below. Do NOT copy the example verbatim; use the file generated in your working directory.
Next, collect the `validators` from each node, combine them into a single array, and save everything to a new file. The old file then needs to be replaced with the new combined file. Do this for all nodes. For a two-node cluster, the `validators` array should look something like this:
## genesis.json #2 - inside chaindata/config
You will find it at `chaindata/config/genesis.json`; it is not to be confused with the one in the working directory. It should look something like the example below. Again, do NOT copy the example verbatim:
As before, collect the `validators` from each node, combine them into a single array, and save everything to a new file. The old file then needs to be replaced with the new combined file. Do this for all nodes. For a two-node cluster, the `validators` array should look something like this:
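The combining step is the same for both files. Below is a minimal sketch in Python, assuming each genesis document has a top-level `validators` array as described above; the sample documents are small stand-ins for illustration, not real genesis files:

```python
import json

def merge_validators(genesis_docs):
    """Return the first genesis doc with the validators of all docs combined."""
    merged = json.loads(json.dumps(genesis_docs[0]))  # deep copy of the base doc
    merged["validators"] = [v for doc in genesis_docs for v in doc["validators"]]
    return merged

# Minimal stand-ins for the genesis.json files collected from two nodes.
node1 = {"chain_id": "zeknd-test", "validators": [{"name": "node1", "power": "10"}]}
node2 = {"chain_id": "zeknd-test", "validators": [{"name": "node2", "power": "10"}]}

combined = merge_validators([node1, node2])
print(len(combined["validators"]))  # → 2
```

Write the combined document back over the old file on every node, as described above.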
# Running
First, we need to get the node key from each node. Go to the working directory and run `zeknd nodekey`:
You should see something like this:
Make a clear note of which node key belongs to which node. Also record the private IP (or any IP the nodes can use to communicate with each other). Generally, in a cloud environment, we use private IPs for security and latency reasons.
Now, let's use an example with 4 nodes:

| Node | IP | Node Key |
|------|----|----------|
| 1 | 10.2.3.4 | 47cd3e4cc27ac621ff8bc59b776fa228adab827e |
| 2 | 10.6.7.8 | e728bada822af677b95cb8ff126ca72cc4e3dc74 |
| 3 | 10.3.2.1 | 4953e5726664985cc1cc92ae2edcfc6e089ba50d |
| 4 | 10.7.6.5 | 02c90b57d241c3c014755ecb07e0c0d232e07fff |
To run `zeknd`, we need to tell each node about its peers. The general format is:
Let's see examples by using the table above.
On node 1:
On node 2:
The same goes for node 3 and node 4. We exclude the node's own key and IP address.
Please remember that all commands need to be executed from within the working directory.
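The per-node peer lists from the table above can be sketched programmatically. Note that the `key@ip:port` layout and port 26656 are assumptions modeled on Tendermint-style networking, not confirmed zeknd defaults; check your build's help output for the exact flag and port:

```python
# Node table from the section above: id -> (ip, node key).
NODES = {
    1: ("10.2.3.4", "47cd3e4cc27ac621ff8bc59b776fa228adab827e"),
    2: ("10.6.7.8", "e728bada822af677b95cb8ff126ca72cc4e3dc74"),
    3: ("10.3.2.1", "4953e5726664985cc1cc92ae2edcfc6e089ba50d"),
    4: ("10.7.6.5", "02c90b57d241c3c014755ecb07e0c0d232e07fff"),
}

def peers_for(node_id, port=26656):
    """Every node except node_id itself, as comma-separated key@ip:port entries."""
    return ",".join(
        f"{key}@{ip}:{port}"
        for n, (ip, key) in sorted(NODES.items())
        if n != node_id
    )

print(peers_for(1))
```

For node 1 this yields the entries for nodes 2, 3, and 4, matching the rule that each node excludes its own key and IP.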
# systemd Startup Script
The following startup script can be used to control the service with systemd. Adjust `WorkingDirectory` and/or `ExecStart` to reflect your setup.

Note that `ExecStart` is constructed the same way as when running `zeknd` directly in the previous section. This means each node has a different startup script.
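As a rough illustration, a unit file might look like the sketch below. The user, paths, and `zeknd` arguments are assumptions to adapt to your node:

```ini
# Illustrative sketch only: User, paths, and the zeknd arguments are
# assumptions; set WorkingDirectory and ExecStart to match your node.
[Unit]
Description=zeknd node
After=network-online.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu
ExecStart=/home/ubuntu/zeknd <peer arguments for this node>
Restart=on-failure

[Install]
WantedBy=multi-user.target
```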
Save it to `/etc/systemd/system/zeknd.service`. Run these to activate it:
You may now inspect the output using:
When you are satisfied that everything is running as intended, execute the following to enable the service so that it starts at boot:
# Verifying

## Listening Ports
If all is well, you will be able to see these ports open on each node.
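As a quick local check, a small Python probe can confirm a port is accepting connections. The port numbers below are assumptions (typical Tendermint-style defaults); substitute the ports listed above for your build:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# 26656/26657 are assumed defaults; use the ports your zeknd build listens on.
for port in (26656, 26657):
    print(port, "open" if port_open("127.0.0.1", port) else "closed")
```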
# Automation
If combining the configuration files and constructing the startup commands seems like a lot of work, it can be automated using Ansible.
Note that Ansible needs to be installed locally.
The playbook is available here.
You will need to change the inventory to match your nodes and preferred working directory.
Please ensure SSH and sudo access to the nodes is available.
## Inventory: inventory.yaml
The inventory specifies the nodes and their IP addresses. If a node only has one IP, use it for both `ansible_host` and `private_ip`. `ansible_host` is used by Ansible to connect to the host, while `private_ip` is used by the nodes to communicate with each other.
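A minimal inventory might look like the sketch below. The group and host names here are assumptions for illustration; match them to what the playbook expects:

```yaml
# Illustrative sketch only: group and host names are assumptions.
all:
  hosts:
    node1:
      ansible_host: 10.2.3.4   # address Ansible connects to
      private_ip: 10.2.3.4     # address the nodes use to talk to each other
    node2:
      ansible_host: 10.6.7.8
      private_ip: 10.6.7.8
```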
After modifying the inventory with the details of the nodes, execute the playbook:
# More Automation: Vagrant
There is also a Vagrantfile included to provision a full cluster. Ansible needs to be installed on the host machine.
It has been tested with the VirtualBox provider. Creating and provisioning 4 nodes takes less than two minutes on a decent machine.
The following variables may be changed when needed:
Note: Vagrant creates its own inventory, so `inventory.yml` is not used.