Create a Solana Validator RPC-Only Node (Part 1)
Validator RPC Node Disk Setup
The requirements for a Solana validator node can be found on the official Solana validator requirements page. Link below:
In the current production setup, we use an n2-standard-64 machine, which has 64 cores, 128 GB of RAM, and 8 TB of local NVMe SSD. Make sure the storage is NVMe, as fast storage is required to bootstrap the large ledger and accounts data quickly. After provisioning the VM, we start with its initial configuration.
Disk Setup
First, create RAID0 devices using the 24x 375 GB local SSDs. Split the 24 local SSDs into three groups:
- 12 – Transaction Ledger
- 10 – Accounts
- 2 – Logs and Spare Storage
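Since RAID0 simply stripes data across its members with no parity overhead, the usable capacity of each group is just the per-disk size times the number of disks. A quick sketch of the arithmetic, assuming the standard 375 GB GCP local SSD size:

```shell
# RAID0 usable capacity = number of member disks x per-disk size
DISK_GB=375
echo "ledger:   $((12 * DISK_GB)) GB"   # 12 disks -> 4500 GB
echo "accounts: $((10 * DISK_GB)) GB"   # 10 disks -> 3750 GB
echo "spare:    $(( 2 * DISK_GB)) GB"   #  2 disks ->  750 GB
```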
To create the RAID0 array for the transaction ledger, run:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=12 \
  /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 \
  /dev/nvme0n5 /dev/nvme0n6 /dev/nvme0n7 /dev/nvme0n8 \
  /dev/nvme0n9 /dev/nvme0n10 /dev/nvme0n11 /dev/nvme0n12
As you can see from the command flags, we pass --raid-devices=12 because that is how many NVMe drives we are striping together. The new block device is named /dev/md0, with a level 0 RAID setup. Once this finishes, we move on to setting up the accounts storage:
sudo mdadm --create /dev/md1 --level=0 --raid-devices=10 \
  /dev/nvme0n13 /dev/nvme0n14 /dev/nvme0n15 /dev/nvme0n16 \
  /dev/nvme0n17 /dev/nvme0n18 /dev/nvme0n19 /dev/nvme0n20 \
  /dev/nvme0n21 /dev/nvme0n22
This is the same command as above, but look closely at the block device name and the number of RAID devices. If you are using a custom number of NVMe drives, adjust --raid-devices to match before listing the device names. Lastly, the spare storage:
sudo mdadm --create /dev/md2 --level=0 --raid-devices=2 \
  /dev/nvme0n23 /dev/nvme0n24
This last command creates the block device for the logs and spare storage. We now move on to formatting the devices with our preferred filesystem. In our setup we favor ext4, as it is mature and provides journaling.
sudo mkfs.ext4 -F /dev/md0
sudo mkfs.ext4 -F /dev/md1
sudo mkfs.ext4 -F /dev/md2
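Before mounting, it is worth sanity-checking that each array came up with the expected member count and that the filesystems were created. A few standard, read-only commands for this (they assume the arrays from the steps above; running them requires the md devices to exist):

```shell
# List all active md arrays and their member devices
cat /proc/mdstat

# Detailed view of one array: RAID level, device count, state
sudo mdadm --detail /dev/md0

# Confirm an ext4 filesystem is present on each array
lsblk -f /dev/md0 /dev/md1 /dev/md2
```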
Once formatting is done, we mount the devices on the server. Create the mount points under /mnt/disks/:
sudo mkdir -p /mnt/disks/solana-{ledger,account,spare}
Check that the directories were created by running ls on the /mnt/disks/ directory. Also, make sure the read, write, and execute permissions on the created directories are correct by running chmod:
sudo chmod a+w /mnt/disks/solana-ledger
sudo chmod a+w /mnt/disks/solana-account
sudo chmod a+w /mnt/disks/solana-spare
The commands above ensure the correct write permissions on the disk mount points. Next, set up automatic mounting on boot by appending entries to /etc/fstab:
echo UUID=`sudo blkid -s UUID -o value /dev/md0` /mnt/disks/solana-ledger ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
echo UUID=`sudo blkid -s UUID -o value /dev/md1` /mnt/disks/solana-account ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
echo UUID=`sudo blkid -s UUID -o value /dev/md2` /mnt/disks/solana-spare ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
This appends a record for each array to /etc/fstab, containing its UUID and mount options. Now that everything is done, reboot the server and check that all the mount points have been mounted by running the mount command.
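A reboot is the full test, but you can catch fstab typos without one: mount -a attempts to mount everything listed in /etc/fstab, and findmnt confirms each mount point resolves to the expected device. A quick sketch, to be run after writing the fstab entries:

```shell
# Mount everything in /etc/fstab that is not already mounted;
# any error here points at a bad fstab entry
sudo mount -a

# Verify each mount point is backed by the expected md device
findmnt /mnt/disks/solana-ledger
findmnt /mnt/disks/solana-account
findmnt /mnt/disks/solana-spare
```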
If everything looks good, you are ready to move on to the next tutorial. If you ever need to change one of the RAID0 arrays, remember to stop it first by running sudo mdadm -S on that specific RAID array device, for example sudo mdadm -S /dev/md0.
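Stopping the array is only the first step of a rebuild: the member disks keep their RAID metadata until it is wiped. A hedged sketch of a full teardown for /dev/md0 (the member device names are taken from the setup above; double-check yours with mdadm --detail before wiping anything):

```shell
# Unmount the filesystem first
sudo umount /mnt/disks/solana-ledger

# Stop (deactivate) the array
sudo mdadm -S /dev/md0

# Wipe the RAID superblock from each former member disk
# so it can be reused in a new array (bash brace expansion)
for dev in /dev/nvme0n{1..12}; do
  sudo mdadm --zero-superblock "$dev"
done

# Also remove the matching UUID line from /etc/fstab; the nofail
# option keeps boot from hanging, but stale entries are confusing
```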
Check out the second part here.