Environment Setup

Install packages

Install the packages we will need.
sudo apt install build-essential libssl-dev tcptraceroute python3-pip \
    jq make automake unzip net-tools nginx ssl-cert pkg-config \
    libffi-dev libgmp-dev libssl-dev libtinfo-dev libsystemd-dev \
    zlib1g-dev g++ libncursesw5 libtool autoconf -y

Environment

Make some directories.
mkdir -p $HOME/.local/bin
mkdir -p $HOME/pi-pool/files
mkdir -p $HOME/pi-pool/scripts
mkdir -p $HOME/pi-pool/logs
mkdir $HOME/git
mkdir $HOME/tmp

Create bash variables & add ~/.local/bin to our $PATH.

Changes to this file require reloading .bashrc or logging out then back in.
Testnet:

echo PATH="$HOME/.local/bin:$PATH" >> $HOME/.bashrc
echo export NODE_HOME=$HOME/pi-pool >> $HOME/.bashrc
echo export NODE_CONFIG=testnet >> $HOME/.bashrc
echo export NODE_FILES=$HOME/pi-pool/files >> $HOME/.bashrc
echo export NODE_BUILD_NUM=$(curl https://hydra.iohk.io/job/Cardano/iohk-nix/cardano-deployment/latest-finished/download/1/index.html | grep -e "build" | sed 's/.*build\/\([0-9]*\)\/download.*/\1/g') >> $HOME/.bashrc
echo export CARDANO_NODE_SOCKET_PATH="$HOME/pi-pool/db/socket" >> $HOME/.bashrc
source $HOME/.bashrc
Mainnet:

echo PATH="$HOME/.local/bin:$PATH" >> $HOME/.bashrc
echo export NODE_HOME=$HOME/pi-pool >> $HOME/.bashrc
echo export NODE_CONFIG=mainnet >> $HOME/.bashrc
echo export NODE_FILES=$HOME/pi-pool/files >> $HOME/.bashrc
echo export NODE_BUILD_NUM=$(curl https://hydra.iohk.io/job/Cardano/iohk-nix/cardano-deployment/latest-finished/download/1/index.html | grep -e "build" | sed 's/.*build\/\([0-9]*\)\/download.*/\1/g') >> $HOME/.bashrc
echo export CARDANO_NODE_SOCKET_PATH="$HOME/pi-pool/db/socket" >> $HOME/.bashrc
source $HOME/.bashrc
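To confirm the new variables were picked up, you can echo a couple of them after sourcing .bashrc (a quick optional sanity check):

echo $NODE_HOME $NODE_CONFIG $NODE_BUILD_NUM
echo $CARDANO_NODE_SOCKET_PATH

If any of these come back empty, re-check the lines appended to ~/.bashrc.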

Retrieve node files

cd $NODE_FILES
wget -N https://hydra.iohk.io/build/${NODE_BUILD_NUM}/download/1/${NODE_CONFIG}-config.json
wget -N https://hydra.iohk.io/build/${NODE_BUILD_NUM}/download/1/${NODE_CONFIG}-byron-genesis.json
wget -N https://hydra.iohk.io/build/${NODE_BUILD_NUM}/download/1/${NODE_CONFIG}-shelley-genesis.json
wget -N https://hydra.iohk.io/build/${NODE_BUILD_NUM}/download/1/${NODE_CONFIG}-alonzo-genesis.json
wget -N https://hydra.iohk.io/build/${NODE_BUILD_NUM}/download/1/${NODE_CONFIG}-topology.json
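Optionally, confirm the five files downloaded and parse as valid JSON with jq (a quick check, assuming you are still in $NODE_FILES):

ls -lh ${NODE_CONFIG}-*.json
for f in ${NODE_CONFIG}-*.json; do jq empty "$f" && echo "$f OK"; done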
Run the following to modify ${NODE_CONFIG}-config.json and set TraceBlockFetchDecisions to "true".
sed -i ${NODE_CONFIG}-config.json \
    -e "s/TraceBlockFetchDecisions\": false/TraceBlockFetchDecisions\": true/g"
Tip for relay nodes: It's possible to reduce memory and CPU usage by setting "TraceMempool" to "false" in mainnet-config.json. This will turn off mempool data in Grafana and gLiveView.sh.
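If you decide to do that on a relay, a sed one-liner in the same style as the one above should work (a sketch; it assumes the key is spelled TraceMempool and is currently set to true in your config):

sed -i ${NODE_CONFIG}-config.json \
    -e "s/TraceMempool\": true/TraceMempool\": false/g"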

Retrieve aarch64 binaries

The unofficial cardano-node & cardano-cli binaries available to us are being built by an IOHK engineer in his spare time. Please visit the 'Arming Cardano' Telegram group for more information.
cd $HOME/tmp
wget -O cardano_node_$(date +"%m-%d-%y").zip https://ci.zw3rk.com/build/1771/download/1/aarch64-unknown-linux-musl-cardano-node-1.29.0.zip
unzip *.zip
mv cardano-node/* $HOME/.local/bin
rm -r cardano*
cd $HOME
If binaries already exist you will have to confirm overwriting the old ones.
Confirm the binaries are in the ada user's $PATH.
cardano-node version
cardano-cli version

Systemd unit files

Let us now create the systemd unit file and startup script so systemd can manage cardano-node.
nano $HOME/.local/bin/cardano-service
Paste the following, save & exit.
Testnet:

#!/bin/bash
DIRECTORY=/home/ada/pi-pool
FILES=/home/ada/pi-pool/files
PORT=3003
HOSTADDR=0.0.0.0
TOPOLOGY=${FILES}/testnet-topology.json
DB_PATH=${DIRECTORY}/db
SOCKET_PATH=${DIRECTORY}/db/socket
CONFIG=${FILES}/testnet-config.json
## +RTS -N4 -RTS = Multicore(4)
cardano-node +RTS -N4 --disable-delayed-os-memory-return -qg -qb -c -RTS run \
    --topology ${TOPOLOGY} \
    --database-path ${DB_PATH} \
    --socket-path ${SOCKET_PATH} \
    --host-addr ${HOSTADDR} \
    --port ${PORT} \
    --config ${CONFIG}
Mainnet:

#!/bin/bash
DIRECTORY=/home/ada/pi-pool
FILES=/home/ada/pi-pool/files
PORT=3003
HOSTADDR=0.0.0.0
TOPOLOGY=${FILES}/mainnet-topology.json
DB_PATH=${DIRECTORY}/db
SOCKET_PATH=${DIRECTORY}/db/socket
CONFIG=${FILES}/mainnet-config.json
## +RTS -N4 -RTS = Multicore(4)
cardano-node +RTS -N4 --disable-delayed-os-memory-return -qg -qb -c -RTS run \
    --topology ${TOPOLOGY} \
    --database-path ${DB_PATH} \
    --socket-path ${SOCKET_PATH} \
    --host-addr ${HOSTADDR} \
    --port ${PORT} \
    --config ${CONFIG}
Allow execution of our new startup script.
chmod +x $HOME/.local/bin/cardano-service
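Optionally, ask bash to parse the script without executing it; this catches typos introduced while pasting:

bash -n $HOME/.local/bin/cardano-service && echo "syntax OK"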
Open /etc/systemd/system/cardano-node.service.
sudo nano /etc/systemd/system/cardano-node.service
Paste the following, save & exit.
# The Cardano Node Service (part of systemd)
# file: /etc/systemd/system/cardano-node.service

[Unit]
Description = Cardano node service
Wants = network-online.target
After = network-online.target

[Service]
User = ada
Type = simple
WorkingDirectory= /home/ada/pi-pool
ExecStart = /bin/bash -c "PATH=/home/ada/.local/bin:$PATH exec /home/ada/.local/bin/cardano-service"
KillSignal=SIGINT
RestartKillSignal=SIGINT
TimeoutStopSec=3
LimitNOFILE=32768
Restart=always
RestartSec=5
#EnvironmentFile=-/home/ada/.pienv

[Install]
WantedBy= multi-user.target
Reload systemd so it picks up our new service file.
sudo systemctl daemon-reload
Let's add a function to the bottom of our .bashrc file to make life a little easier.
nano $HOME/.bashrc
cardano-service() {
    #do things with parameters like $1 such as
    sudo systemctl "$1" cardano-node.service
}
Save & exit.
source $HOME/.bashrc
What we just did there was add a function to control the cardano-node service without having to type out:
    sudo systemctl enable cardano-node.service
    sudo systemctl start cardano-node.service
    sudo systemctl stop cardano-node.service
    sudo systemctl status cardano-node.service
Now we just have to:
    cardano-service enable (enables cardano-node.service auto start at boot)
    cardano-service start (starts cardano-node.service)
    cardano-service stop (stops cardano-node.service)
    cardano-service status (shows the status of cardano-node.service)

Syncing the chain

You are now ready to start cardano-node. Doing so will start the process of 'syncing the chain'. This is going to take about 30 hours and the db folder is about 8.5GB in size right now. We used to have to sync it to one node and copy it from that node to our new ones to save time.

Download snapshot

Do not attempt this on an 8GB sd card. Not enough space! Create your image file and flash it to your ssd.
I have started taking snapshots of my backup node's db folder and hosting them in a web directory. With this service it takes around 15 minutes to pull the latest snapshot and maybe another 30 minutes to sync up to the tip of the chain. This service is provided as is; it is up to you. If you want to sync the chain on your own, simply:
cardano-service enable
cardano-service start
cardano-service status
Otherwise, make sure your node is not running, delete the db folder if it exists, and download the db snapshot.
cardano-service stop
cd $NODE_HOME
rm -r db/
Download the DB snapshot.
Testnet:

wget -r -np -nH -R "index.html*" -e robots=off https://test-db.adamantium.online/db/
Mainnet:

wget -r -np -nH -R "index.html*" -e robots=off https://db.adamantium.online/db/
Once wget completes enable & start cardano-node.
cardano-service enable
cardano-service start
cardano-service status

gLiveView.sh

The Guild Operators scripts include a couple of useful tools for operating a pool. We do not need the project as a whole, but there are a couple of scripts we are going to use.
guild-operators/scripts/cnode-helper-scripts at master · cardano-community/guild-operators
cd $NODE_HOME/scripts
wget https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/env
wget https://raw.githubusercontent.com/cardano-community/guild-operators/master/scripts/cnode-helper-scripts/gLiveView.sh
We have to edit the env file to work with our environment. The port number here will have to be updated to match the port cardano-node is running on. For the Pi-Node it's port 3003. As we build the pool we will work down. For example Pi-Relay(2) will run on port 3002, Pi-Relay(1) on 3001 and Pi-Core on port 3000.
You can change the port cardano-node runs on in /home/ada/.local/bin/cardano-service.
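For example, if this machine will be Pi-Relay(1) rather than the default Pi-Node, you could switch the port in the startup script with a one-liner like this (a sketch; adjust the numbers to your own layout and restart the cardano-service afterwards):

sed -i $HOME/.local/bin/cardano-service -e "s/PORT=3003/PORT=3001/"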
sed -i env \
    -e "s/\#CNODE_HOME=\"\/opt\/cardano\/cnode\"/CNODE_HOME=\"\/home\/ada\/pi-pool\"/g" \
    -e "s/6000/3003/g" \
    -e "s/\#CONFIG=\"\${CNODE_HOME}\/files\/config.json\"/CONFIG=\"\${NODE_FILES}\/${NODE_CONFIG}-config.json\"/g" \
    -e "s/\#SOCKET=\"\${CNODE_HOME}\/sockets\/node0.socket\"/SOCKET=\"\${NODE_HOME}\/db\/socket\"/g"
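You can confirm the substitutions landed by grepping the edited file (an optional check):

grep -E "^(CNODE_HOME|CNODE_PORT|CONFIG|SOCKET)=" env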
Allow execution of gLiveView.sh.
chmod +x gLiveView.sh

topologyUpdater.sh

Until peer-to-peer is enabled on the network, operators need a way to get a list of relays/peers to connect to. The topology updater service runs in the background with cron. Every hour the script will run and tell the service you are a relay and want to be a part of the network. It will add your relay to its directory after four hours and start generating a list of relays in a json file in the $NODE_HOME/logs directory. A second script, relay-topology_pull.sh, can then be used manually to generate a mainnet-topology file with relays/peers that are aware of you and you of them.
The list generated will show you the distance in miles & a clue as to where the relay is located.
Open a file named topologyUpdater.sh
cd $NODE_HOME/scripts
nano topologyUpdater.sh
Paste in the following, save & exit.
The port number here must match the port cardano-node is running on. If you are using DNS records you can add the FQDN that matches on line 6 (line 6 only). Leave it as is if you are not using DNS; the service will pick up the public IP and use that.
#!/bin/bash
# shellcheck disable=SC2086,SC2034

USERNAME=ada
CNODE_PORT=3003 # must match your relay node port as set in the startup command
CNODE_HOSTNAME="CHANGE ME" # optional. must resolve to the IP you are requesting from
CNODE_BIN="/home/ada/.local/bin"
CNODE_HOME="/home/ada/pi-pool"
LOG_DIR="${CNODE_HOME}/logs"
GENESIS_JSON="${CNODE_HOME}/files/testnet-shelley-genesis.json"
NETWORKID=$(jq -r .networkId $GENESIS_JSON)
CNODE_VALENCY=1 # optional for multi-IP hostnames
NWMAGIC=$(jq -r .networkMagic < $GENESIS_JSON)
[[ "${NETWORKID}" = "Mainnet" ]] && HASH_IDENTIFIER="--mainnet" || HASH_IDENTIFIER="--testnet-magic ${NWMAGIC}"
[[ "${NWMAGIC}" = "764824073" ]] && NETWORK_IDENTIFIER="--mainnet" || NETWORK_IDENTIFIER="--testnet-magic ${NWMAGIC}"

export PATH="${CNODE_BIN}:${PATH}"
export CARDANO_NODE_SOCKET_PATH="${CNODE_HOME}/db/socket"

blockNo=$(/home/ada/.local/bin/cardano-cli query tip ${NETWORK_IDENTIFIER} | jq -r .block )

# Note:
# if you run your node in an IPv4/IPv6 dual stack network configuration and want to announce
# the IPv4 address only, please add the -4 parameter to the curl command below (curl -4 -s ...)
if [ "${CNODE_HOSTNAME}" != "CHANGE ME" ]; then
  T_HOSTNAME="&hostname=${CNODE_HOSTNAME}"
else
  T_HOSTNAME=''
fi

if [ ! -d ${LOG_DIR} ]; then
  mkdir -p ${LOG_DIR};
fi

curl -s -f -4 "https://api.clio.one/htopology/v1/?port=${CNODE_PORT}&blockNo=${blockNo}&valency=${CNODE_VALENCY}&magic=${NWMAGIC}${T_HOSTNAME}" | tee -a "${LOG_DIR}"/topologyUpdater_lastresult.json
Save, exit, and make it executable.
chmod +x topologyUpdater.sh
You will not be able to successfully execute ./topologyUpdater.sh until you are fully synced up to the tip of the chain.
Create a cron job that will run the script every hour. Choose nano when prompted for an editor.
crontab -e
Add the following to the bottom, save & exit.
The Pi-Node image has this cron entry disabled by default. You can enable it by removing the #.
33 * * * * /home/ada/pi-pool/scripts/topologyUpdater.sh
After 4 hours of onboarding you will be added to the service and can pull your new list of peers into the mainnet-topology file.
Create another file relay-topology_pull.sh and paste in the following.
nano relay-topology_pull.sh
#!/bin/bash
BLOCKPRODUCING_IP=<BLOCK PRODUCERS PRIVATE IP>
BLOCKPRODUCING_PORT=3000
curl -4 -s -o /home/ada/pi-pool/files/testnet-topology.json "https://api.clio.one/htopology/v1/fetch/?max=15&magic=1097911063&customPeers=${BLOCKPRODUCING_IP}:${BLOCKPRODUCING_PORT}:1"
Save, exit and make it executable.
chmod +x relay-topology_pull.sh
Pulling in a new list will overwrite your existing topology file. Keep that in mind.
After 4 hours you can pull in your new list and restart the cardano-service.
cd $NODE_HOME/scripts
./relay-topology_pull.sh
relay-topology_pull.sh will add 15 peers to your mainnet-topology file. I usually remove the furthest 5 relays and use the closest 10.
nano $NODE_FILES/${NODE_CONFIG}-topology.json
You can use gLiveView.sh to view ping times in relation to the peers in your mainnet-topology file. Use ping to resolve hostnames to IPs.
Changes to this file will take effect upon restarting the cardano-service.
Don't forget to remove the last comma in your topology file!
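After hand-editing, it is worth confirming the file is still valid JSON; jq will exit with an error and point at the offending line if a stray comma is left behind (an optional check):

jq empty $NODE_FILES/${NODE_CONFIG}-topology.json && echo "topology OK"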
Status should show as enabled & running.
Once your node syncs past epoch 208 (Shelley era) you can use gLiveView.sh to monitor.
It can take up to an hour for cardano-node to sync to the tip of the chain. Use ./gLiveView.sh, htop and the log outputs to view progress. Be patient, it will come up.
cd $NODE_HOME/scripts
./gLiveView.sh

Prometheus, Node Exporter & Grafana

Prometheus connects to cardano-node's backend and serves metrics over HTTP. Grafana in turn can use that data to display graphs and create alerts. Our Grafana dashboard will be made up of data from our Ubuntu system & cardano-node. Grafana can display data from other sources as well, like adapools.org.
You can connect a Telegram bot to Grafana which can alert you of problems with the server. Much easier than trying to configure email alerts.

Install Prometheus & Node Exporter.

Prometheus can scrape the HTTP endpoints of other servers running node exporter. This means Grafana and Prometheus do not have to be installed on your core and relays; only the prometheus-node-exporter package is required on those machines if you would like to build a central Grafana dashboard for the pool, freeing up resources.
sudo apt-get install -y prometheus prometheus-node-exporter
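On machines that will only be scraped remotely (for example your relays, if you centralise Grafana on one box), a minimal install along the lines of the note above should be enough:

sudo apt-get install -y prometheus-node-exporter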
Disable them in systemd for now.
sudo systemctl disable prometheus.service
sudo systemctl disable prometheus-node-exporter.service

Configure Prometheus

Open prometheus.yml.
sudo nano /etc/prometheus/prometheus.yml
Replace the contents of the file with the following.
Indentation must be correct YAML format or Prometheus will fail to start.
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label job=<job_name> to any timeseries scraped from this config.
  - job_name: 'Prometheus' # To scrape data from the cardano node
    scrape_interval: 5s
    static_configs:
#    - targets: ['<CORE PRIVATE IP>:12798']
#      labels:
#        alias: 'C1'
#        type: 'cardano-node'
#    - targets: ['<RELAY PRIVATE IP>:12798']
#      labels:
#        alias: 'R1'
#        type: 'cardano-node'
    - targets: ['localhost:12798']
      labels:
        alias: 'N1'
        type: 'cardano-node'

#    - targets: ['<CORE PRIVATE IP>:9100']
#      labels:
#        alias: 'C1'
#        type: 'node'
#    - targets: ['<RELAY PRIVATE IP>:9100']
#      labels:
#        alias: 'R1'
#        type: 'node'
    - targets: ['localhost:9100']
      labels:
        alias: 'N1'
        type: 'node'
Save & exit.
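Because YAML is indentation-sensitive, it may be worth validating the file before starting the service; the prometheus package ships a promtool binary that can do this (an optional check):

promtool check config /etc/prometheus/prometheus.yml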
Edit ${NODE_CONFIG}-config.json so cardano-node exposes its metrics on all interfaces.
cd $NODE_FILES
sed -i ${NODE_CONFIG}-config.json -e "s/127.0.0.1/0.0.0.0/g"
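The change only takes effect after cardano-node restarts. Once the node is running again you can confirm the metrics endpoint answers on port 12798, the port set by hasPrometheus in the config (an optional check):

cardano-service restart
curl -s http://localhost:12798/metrics | head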

Install Grafana

GitHub - grafana/grafana: The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
Add Grafana's gpg key to Ubuntu.
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
Add latest stable repo to apt sources.
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
Update your package lists & install Grafana.
sudo apt update
sudo apt install grafana
Change the port Grafana listens on so it does not clash with cardano-node.
sudo sed -i /etc/grafana/grafana.ini \
    -e "s/;http_port/http_port/" \
    -e "s/3000/5000/"
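You can verify the change with a quick grep; the http_port line should now read 5000 (an optional check):

grep "^http_port" /etc/grafana/grafana.ini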

cardano-monitor bash function

Open .bashrc.
cd $HOME
nano .bashrc
Down at the bottom, add the following.
cardano-monitor() {
    #do things with parameters like $1 such as
    sudo systemctl "$1" prometheus.service
    sudo systemctl "$1" prometheus-node-exporter.service
    sudo systemctl "$1" grafana-server.service
}
Save, exit & source.
source .bashrc
Here we tied all three services under one function. Enable prometheus.service, prometheus-node-exporter.service & grafana-server.service to run on boot and start the services.
cardano-monitor enable
cardano-monitor start
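If you want to confirm the monitoring stack came up, each service exposes a local endpoint you can hit with curl (an optional check; ports as configured above):

curl -s http://localhost:9090/-/healthy         # Prometheus
curl -s http://localhost:9100/metrics | head -1 # node exporter
curl -sI http://localhost:5000 | head -1        # Grafana on its new port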
At this point you may want to start the cardano-service and get synced up before we continue to configure Grafana. Skip ahead to the syncing the chain section and choose whether you want to wait ~30 hours or download my latest chain snapshot. Return here once gLiveView.sh shows you are at the tip of the chain.

Configure Grafana

On your local machine open your browser and go to http://<Pi-Node's private ip>:5000.
Do not change the default password yet, as there is no encryption on the wire; choose Skip when it asks. The next time we visit Grafana it will be with a self-signed TLS certificate handled by the Nginx webserver's proxy_pass, and your passwords will be safe from anything listening on your internal network.
Log in with the default username and password (admin:admin) and skip setting a new one for now.

Configure data source

In the left hand vertical menu go to Configuration > Data sources and click Add data source. Choose Prometheus. Enter http://localhost:9090 where it is greyed out; everything else can be left default. At the bottom click save & test. You should get the green "Data source is working" if cardano-monitor has been started. If for some reason those services failed to start, issue cardano-monitor restart.

Import dashboards

Save the dashboard json files to your local machine.
GitHub - armada-alliance/dashboards: Collection of Grafana Dashboards for cardano-node.
In the left hand vertical menu go to Dashboards > Manage and click on Import. Select the file you just downloaded/created and save. Head back to Dashboards > Manage and click on your new dashboard.

Configure poolDataLive

Here you can use the poolData API to bring your pool's data into Grafana.
PoolData.Live API
Follow the instructions to install the Grafana plugin, configure your datasource and import the dashboard.
Follow the log output in the journal.
sudo journalctl --unit=cardano-node --follow
Follow the log output in syslog.
sudo tail -f /var/log/syslog

Grafana, Nginx proxy_pass & snakeoil

Let's put Grafana behind Nginx with a self-signed (snakeoil) certificate. The certificate was generated when we installed the ssl-cert package.
You will get a warning from your browser because ca-certificates cannot follow a trust chain to a trusted (centralized) source. The connection is, however, encrypted and will protect your passwords from flying around in plain text.
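The ssl-cert package puts the snakeoil pair in the standard Debian locations; you can confirm both files exist before pointing Nginx at them (an optional check; sudo is needed to read the private directory):

sudo ls -l /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.key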
sudo nano /etc/nginx/sites-available/default
Replace the contents of the file with the following.
# Default server configuration
#
server {
    listen 80 default_server;
    return 301 https://$host$request_uri;
}

server {
    # SSL configuration
    #
    listen 443 ssl default_server;
    #listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    include snippets/snakeoil.conf;

    add_header X-Proxy-Cache $upstream_cache_status;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_redirect off;
        include proxy_params;
    }
}
Check that Nginx is happy with our changes and restart it.
sudo nginx -t
## if ok do
sudo service nginx restart
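From the Pi itself you can check that the redirect and the TLS listener behave as expected before opening the browser; -k is needed because the certificate is self-signed (an optional check):

curl -sI http://localhost | head -1    # expect a 301 redirect to https
curl -skI https://localhost | head -1  # response served by Grafana through the proxy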
You can now visit your Pi-Node's IP address without any port specification; the connection will be upgraded to SSL/TLS and you will get a scary browser warning (not really scary at all). Continue through to your dashboard.
From here you have a Pi-Node with the tools to build a stake pool from the following pages. Best of luck, and please join the Armada Alliance; together we are stronger!