
Setup - Hetzner Bare Metal

Setting up a Kubernetes Cluster with Hetzner Dedicated Servers


This guide for Hetzner Bare Metal is a work in progress and not currently recommended. Proceed with caution until an update is released and this warning is removed.

Check out the source repository to manage a Kubernetes cluster of dedicated servers on Hetzner.

The scripts in this repository set up and maintain one or more clusters of dedicated servers. Each cluster is also provisioned to operate as a node in the THORChain network.

Executing the scripts, in combination with some manual procedures, will give you highly available, secure clusters on bare metal with the following features:

  • Kubernetes (Kubespray-based)

  • Internal NVMe storage (Ceph/Rook)

  • Virtual LAN, also over multiple locations (Calico)

  • Load Balancing (MetalLB)

Preparations

Servers

Acquire a couple of dedicated servers as the basis for a cluster (AX41-NVMEs work well, for instance). Visit the Hetzner admin panel and name the servers appropriately, for example:

tc-k8s-node1
tc-k8s-node2
tc-k8s-node3
...

tc-k8s-master1
tc-k8s-master2
tc-k8s-worker1
tc-k8s-worker2
tc-k8s-worker3
...

Refer to the reset procedure below (see "Resetting the bare metal servers") to properly initialize them.

vSwitch

Create a vSwitch and order an appropriate subnet (it may take a while to show up after the order). Give the vSwitch a name (e.g. tc-k8s-net) and assign this vSwitch to the servers.

Check out the Hetzner vSwitch docs for help.

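The playbooks in this repository are expected to configure the vSwitch networking on the servers themselves; for orientation only, attaching a dedicated server to a vSwitch amounts to a VLAN interface (VLAN ID in the 4000-4091 range, MTU 1400) on the physical NIC. A minimal netplan sketch, with the interface name and addresses as placeholders:

# /etc/netplan/60-vswitch.yaml (illustrative only)
network:
  version: 2
  vlans:
    enp0s31f6.4001:            # physical NIC name is a placeholder
      id: 4001                 # must match the VLAN ID of your vSwitch
      link: enp0s31f6
      mtu: 1400                # required for Hetzner vSwitch traffic
      addresses: [ 10.0.1.1/24 ]   # private address from your chosen subnet
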
Usage

Clone this repository, cd into it, and pull in Kubespray (it is included as a git submodule):

git submodule init && git submodule update
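
For completeness, the full sequence looks roughly like this; the repository URL and directory name are placeholders for the source repository mentioned at the top of this guide:

# Placeholder URL and directory; substitute the actual source repository
git clone <source-repository-url> tc-hetzner-cluster
cd tc-hetzner-cluster
git submodule init && git submodule update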

Create a Python virtual environment or similar.

# Optional
virtualenv -p python3 venv
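
If you created the virtual environment, activate it before installing the dependencies below (standard virtualenv usage, not specific to this repository):

# Optional: activate the virtual environment
source venv/bin/activate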

Install the dependencies required via pip and Ansible Galaxy.

pip install -r requirements.python.txt
ansible-galaxy install -r requirements.ansible.yml

Provisioning

Create a deployment environment inventory file for each cluster you want to manage.

cp hosts.example inventory/production.yml
cp hosts.example inventory/test.yml
cp hosts.example inventory/environment.yml
...

cp hosts.example inventory/production-01.yml
cp hosts.example inventory/production-02.yml
...

cp hosts.example inventory/production-helsinki.yml
cp hosts.example inventory/whatever.yml

Edit the inventory file with your servers' IP addresses and network information, and customize everything to your needs.
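
The exact keys are defined by hosts.example in the repository, so treat the following only as an illustration of a typical Kubespray-style inventory; host names, addresses and group names are placeholders, and group names also differ between Kubespray versions (e.g. kube_control_plane vs. kube-master):

# inventory/production.yml (illustrative layout only)
all:
  hosts:
    tc-k8s-master1:
      ansible_host: 95.217.0.10   # public IP (placeholder)
      ip: 10.0.1.1                # private vSwitch IP (placeholder)
    tc-k8s-worker1:
      ansible_host: 95.217.0.11
      ip: 10.0.1.2
  children:
    kube_control_plane:
      hosts:
        tc-k8s-master1:
    etcd:
      hosts:
        tc-k8s-master1:
    kube_node:
      hosts:
        tc-k8s-worker1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node: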

# Manage a cluster
ansible-playbook cluster.init.yml -i inventory/environment.yml
ansible-playbook --become --become-user=root kubespray/cluster.yml -i inventory/environment.yml
ansible-playbook cluster.finish.yml -i inventory/environment.yml

# Run custom playbooks
ansible-playbook private-cluster.yml -i inventory/environment.yml
ansible-playbook private-test-cluster.yml -i inventory/environment.yml
ansible-playbook private-whatever-cluster.yml -i inventory/environment.yml

Note: Mitogen does not work with Ansible collections, so the strategy must be changed (i.e. strategy: linear).

Check out the Kubespray documentation for more playbooks on cluster management.

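Once the playbooks have finished, you can sanity-check the cluster from a machine with kubectl configured against it; how the kubeconfig is exposed depends on the repository's playbooks, so this is only a generic check:

# Confirm all nodes have joined and are Ready
kubectl get nodes -o wide
# Confirm system components (and later Ceph/Rook and MetalLB) are running
kubectl get pods --all-namespaces
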
THORChain

In order for the cluster to operate as a node in the THORChain network, deploy THORNode as instructed in the Deploying guide. You can also refer to the node-launcher repository, if necessary, or the THORChain documentation as a whole.

Resetting the bare metal servers

Manually

Visit the Hetzner console and put each server of the cluster into rescue mode. Then execute the following script.

installimage -a -r no -i images/Ubuntu-2004-focal-64-minimal.tar.gz -p /:ext4:all -d nvme0n1 -f yes -t yes -n hostname

This installs Ubuntu 20.04 on only one of the two internal NVMe drives; the unused drive is later used for persistent storage with Ceph/Rook. You can check the internal drive setup with lsblk and change the command above accordingly when necessary.

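To see which drive the OS should target before running installimage, list the block devices; the output below is illustrative and your device names and sizes will differ:

# Pick one NVMe drive for the OS (-d ...) and leave the other untouched for Ceph/Rook
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# NAME      SIZE    TYPE
# nvme0n1   476.9G  disk   <- OS target (-d nvme0n1)
# nvme1n1   476.9G  disk   <- left free for persistent storage
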
Automatically

Create a pristine state by running the playbooks in sequence.

ansible-playbook server.rescue.yml -i inventory/environment.yml
ansible-playbook server.bootstrap.yml -i inventory/environment.yml

Instantiation

Instantiate the servers.

ansible-playbook server.instantiate.yml -i inventory/environment.yml

