Where It All Started.

Life, stock trading, investments, business, and startups. Mostly programming stuff.

Category: Development

Create Solana Validator RPC Only Node Part 2

If you haven’t read the first part yet, make sure to check it out at this link.

Set Up the Solana Validator

Before you start, make sure you install the Solana CLI first; you can find information on how to install the latest release here. Reboot your machine afterwards to make sure everything is in order.
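
At the time of writing, the official docs install the CLI with a one-liner similar to the one below; double-check the current documentation for the exact URL and release channel, as these may change.

sh -c "$(curl -sSfL https://release.solana.com/stable/install)"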

In this config we will connect directly as a non-voting validator to the Solana mainnet-beta cluster. First, configure the Solana tools that you installed.

solana config set --url https://api.mainnet-beta.solana.com

Then run the sys-tuner once; this configures your machine’s kernel settings to the recommended values.

sudo $(command -v solana-sys-tuner) --user $(whoami) > sys-tuner.log 2>&1 &

This runs only once, so you need to run it again after every reboot. Alternatively, you can configure a systemd service to do it for you. To configure a systemd service, create a file named solana-sys-tuner.service in the directory /etc/systemd/system.

sudo tee /etc/systemd/system/solana-sys-tuner.service > /dev/null << 'EOF'
[Unit]
Description=Solana System Tuner
After=network.target
Before=sol.service

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/home/ubuntu/solana-sys-tuner.sh

[Install]
WantedBy=multi-user.target
WantedBy=sol.service
EOF

This creates the service; now you can run sudo systemctl enable --now solana-sys-tuner.service to enable it at boot and start it immediately. You can also tune the system manually without running the solana-sys-tuner binary; just follow the tutorial in the official documentation here.

Also, don’t forget to create solana-sys-tuner.sh in your user’s home directory.

cat > ~/solana-sys-tuner.sh << 'EOF'
#!/usr/bin/env bash
set -ex

exec /home/ubuntu/.local/share/solana/install/active_release/bin/solana-sys-tuner --user ubuntu
EOF
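
Make the script executable, otherwise systemd will fail to start it:

chmod +x ~/solana-sys-tuner.sh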

Now you can start the validator. To begin, first prepare a validator keypair.

solana-keygen new -o ~/validator-keypair.json

This will create a validator keypair in your home directory. Don’t forget to save the generated BIP39 seed phrase from the output. DON’T FORGET. Once done, if you forget your public key, you can view it using the command solana-keygen pubkey ~/validator-keypair.json. You will need the public key for later commands.

Set the validator keypair in your Solana CLI tool:

solana config set --keypair ~/validator-keypair.json

That’s all for configuration; we can now start the validator. Create a simple shell script to contain the run parameters of the solana-validator command, so it will be easier to modify and adjust later on.

cat > validator.sh << 'EOF'
#!/usr/bin/env bash

set -e

exec solana-validator \
    --no-voting \
    --identity ~/validator-keypair.json \
    --known-validator 7Np41oeYqPefeNQEHSv1UDhYrehxin3NStELsSKCT4K2 \
    --known-validator GdnSyH3YtwcxFvQrVVJMm1JhTS4QVX7MFsX56uJLUfiZ \
    --known-validator DE1bawNcRJB9rVm3buyMVfr8mBEoyyu73NBovf2oXJsJ \
    --known-validator CakcnaRDHka2gXyfbEd2d3xsvkJkqsLw2akB3zsN1D2S \
    --only-known-rpc \
    --ledger /mnt/disks/solana-ledger \
    --accounts /mnt/disks/solana-account \
    --rpc-port 8899 \
    --rpc-bind-address 0.0.0.0 \
    --dynamic-port-range 8000-8020 \
    --entrypoint entrypoint.mainnet-beta.solana.com:8001 \
    --entrypoint entrypoint2.mainnet-beta.solana.com:8001 \
    --entrypoint entrypoint3.mainnet-beta.solana.com:8001 \
    --entrypoint entrypoint4.mainnet-beta.solana.com:8001 \
    --entrypoint entrypoint5.mainnet-beta.solana.com:8001 \
    --expected-genesis-hash 5eykt4UsFv8P8NJdTREpY1vzqKqZKvdpKuc147dw2N9d \
    --wal-recovery-mode skip_any_corrupted_record \
    --limit-ledger-size \
    --no-port-check \
    --enable-rpc-transaction-history \
    --full-rpc-api \
    --log /mnt/disks/solana-spare/logs/solana-validator.log
EOF
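
Make the script executable, and make sure it ends up at /home/ubuntu/validator.sh, which is where the systemd unit below expects it:

chmod +x ~/validator.sh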

These flags open the RPC port to the public, and the node runs in RPC-only mode due to the --no-voting flag. They also enable RPC transaction history (--enable-rpc-transaction-history), which will make the ledger disk usage grow large. You can read about every flag of the solana-validator binary by running it with --help.

Wait for a while; the node will download a very large snapshot so it can catch up to the latest transactions. The ledger will only contain the most recent transaction history of the Solana chain. How long this takes depends on the speed of your machine and your network. Once no download percentage is showing anymore, you’re good to go.
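
One way to watch the node catch up to the cluster is the solana catchup subcommand; here we assume the default local RPC port 8899 from the script above.

solana catchup ~/validator-keypair.json http://127.0.0.1:8899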

Make sure your node is on the list of gossip nodes using the solana gossip command.

solana gossip | grep <pubkey>

That’s all; now you’re part of the validators. One more thing: in order to run it on reboot, add a systemd service file. Create the file using the command below, in the same directory as solana-sys-tuner.service.

sudo tee /etc/systemd/system/sol.service > /dev/null << 'EOF'
[Unit]
Description=Solana Validator
After=network.target
Wants=solana-sys-tuner.service
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=ubuntu
LimitNOFILE=1000000
LogRateLimitIntervalSec=0
Environment="PATH=/bin:/usr/bin:/home/ubuntu/.local/share/solana/install/active_release/bin"
ExecStart=/home/ubuntu/validator.sh

[Install]
WantedBy=multi-user.target
EOF

Then enable it at boot using the command sudo systemctl enable --now sol.service. Make sure that it doesn’t have errors by checking the service status with the commands below. Last thing to mention regarding logs: since they can become large quickly, make sure to create a logrotate rule. The rule after the status commands is grabbed from the official documentation, adjusted to point at our log location.
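
Standard systemd commands work for checking on the service and tailing its journal:

sudo systemctl status sol.service
sudo journalctl -u sol.service -f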

cat > logrotate.sol <<EOF
/mnt/disks/solana-spare/logs/solana-validator.log {
  rotate 7
  daily
  missingok
  postrotate
    systemctl kill -s USR1 sol.service
  endscript
}
EOF

sudo cp logrotate.sol /etc/logrotate.d/sol
sudo systemctl restart logrotate.service

That’s all, reboot and celebrate 🎉! Don’t forget to share and leave a comment if you like articles like this.

Create Solana Validator RPC Only Node Part 1

🛤 Validator RPC Node Disk Setup

The requirements for the Solana validator node can be found on the official Solana validator requirements page. Link below:

Solana Validator Requirements

In the current production setup, we use an n2-standard-64 machine, which has 64 cores, 128 GB of RAM, and 8 TB of local NVMe SSD. Make sure the storage is NVMe, as that is required for fast bootstrapping of the large ledger and accounts data. After provisioning the VM, we start with its initial configuration.


👷‍♂️ Disk Setup

First, create RAID0 devices using the 24x 375 GB local SSDs. Split the 24 local SSDs into three groups:

  • 12 – Transaction Ledger
  • 10 – Accounts
  • 2 – Logs and Spare Storage

To create the RAID0 device for the transaction ledger, execute this command:

sudo mdadm --create /dev/md0 --level=0 --raid-devices=12 \
  /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 \
  /dev/nvme0n5 /dev/nvme0n6 /dev/nvme0n7 /dev/nvme0n8 \
  /dev/nvme0n9 /dev/nvme0n10 /dev/nvme0n11 /dev/nvme0n12

As you can see in the command flags, we pass --raid-devices=12 because that’s how many NVMe drives we are combining, and we name the block device /dev/md0 with a level 0 RAID setup. After finishing this, we move on to setting up the accounts storage:

sudo mdadm --create /dev/md1 --level=0 --raid-devices=10 \
  /dev/nvme0n13 /dev/nvme0n14 /dev/nvme0n15 /dev/nvme0n16 \
  /dev/nvme0n17 /dev/nvme0n18 /dev/nvme0n19 /dev/nvme0n20 \
  /dev/nvme0n21 /dev/nvme0n22

It’s the same command as above, but look closely at the block device name and the number of RAID devices. If you are using a custom number of NVMe drives, adjust --raid-devices first before appending the device names. Lastly, the spare storage:

sudo mdadm --create /dev/md2 --level=0 --raid-devices=2 \
  /dev/nvme0n23 /dev/nvme0n24

This is the last command, adding the block device for the spare storage. We now move on to formatting the devices with our preferred filesystem. In our setup we favor ext4, as it is a mature filesystem with journaling.

sudo mkfs.ext4 -F /dev/md0
sudo mkfs.ext4 -F /dev/md1
sudo mkfs.ext4 -F /dev/md2

When formatting is done, we move on to mounting the devices. Create the mount points, which will live under /mnt/disks/.

sudo mkdir -p /mnt/disks/solana-{ledger,account,spare}

Check that the directories are okay by running ls on /mnt/disks/. Also, make sure the read, write, and execute permissions are correct by running chmod on the created directories.

sudo chmod a+w /mnt/disks/solana-ledger
sudo chmod a+w /mnt/disks/solana-account
sudo chmod a+w /mnt/disks/solana-spare

The commands above ensure the correct write permissions on the disk mount points. Then set up auto-mounting at boot by appending to /etc/fstab:

echo UUID=`sudo blkid -s UUID -o value /dev/md0` /mnt/disks/solana-ledger ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
echo UUID=`sudo blkid -s UUID -o value /dev/md1` /mnt/disks/solana-account ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
echo UUID=`sudo blkid -s UUID -o value /dev/md2` /mnt/disks/solana-spare ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab

This appends a record to /etc/fstab for each filesystem, containing its UUID and mount options. Now that everything’s done, reboot the server and check that all the mount points are mounted by running the mount command.
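
If you’d rather verify before rebooting, you can mount everything straight from /etc/fstab and inspect the result:

sudo mount -a
df -h /mnt/disks/solana-ledger /mnt/disks/solana-account /mnt/disks/solana-spare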

If everything’s good, then you’re ready for the next tutorial. In case you need to change the RAID0 arrays, remember that you can stop and delete a specific array device with sudo mdadm -S /dev/md0.

Check out the second part here.

New Crypto Malware Targets Browser Wallet Extensions

New malware that can corrupt crypto wallets and extensions has been discovered, putting investors at risk of being hacked.

A type of malware known as Mars Stealer, an upgraded version of the information stealer Oski Stealer, has surfaced to prey on web browsers, crypto extensions, and crypto wallets, according to a new blog post by network security specialist 3xp0rt.

Internet Explorer, Firefox, Microsoft Edge, and Thunderbird are among the common applications affected by the infection.

It also targets wallets like Bitcoin Core and its derivatives, as well as crypto extensions like MetaMask, TronLink, Binance Chain Wallet, and Coinbase Wallet. MultiDoge and Ethereum wallets might also be harmed in the future.

The virus, according to 3xp0rt, only targets crypto extensions on Chromium-based browsers other than Opera.

Mars Stealer, according to the cybersecurity expert, works by gaining access to a computer’s internal library files and performing a sophisticated sequence of technical code reconfigurations to carry out its tasks.

According to 3xp0rt:

Mars Stealer is an improved version of Oski Stealer. [It] has added [functionality]: anti-debug check, crypto extension stealing, but Outlook stealing is missing. The code has been refactored, but some algorithm remained stupid as in Oski Stealer.

The virus targets sensitive data saved in the wallet.dat file to steal a user’s wallet information. According to the internet security expert, the file contains information such as the address and private key access data. A grabber, loader, and self-removal function are also included in the virus.

Kintsugi Merge Testnet For Ethereum (ETH) Is Now Live

The Kintsugi testnet, the latest step in replacing Ethereum’s Proof-of-Work consensus method with Proof-of-Stake, has been deployed. The mainnet and beacon chains are expected to merge in Q1/Q2 of 2022. According to a release from ConsenSys, over 8.4 million ETH has been staked on Ethereum 2.0’s beacon chain.

Ethereum core developer Tim Beiko wrote in his announcement: “The Kintsugi testnet provides the community an opportunity to experiment with post-merge Ethereum and begin to identify any issues.”

The Kintsugi testnet will help prepare for the “merge” to Ethereum 2.0. Following the merge, Ethereum 2.0 will move toward “Phase 2,” which will introduce sharding, a scalability feature that will improve fees and transaction times. Sharding is expected to arrive in late 2022.

Ethereum (ETH) Client Incentive Program To Reward Developers

Developer teams will get incentives in ETH that unlock over time as part of the program. Each client team will receive 144 validators in total, or 4,608 ETH (144 validators × 32 ETH each), worth about $17.8 million at current prices. The program’s structure aligns teams with the network’s long-term health and guarantees that they are rewarded for developing secure software. The ETH1 chain and the ETH2 beacon chain will be merged sometime in the first quarter of 2022. Since the London upgrade in early August, the network has burned 1.19 million ETH. At current prices, this is worth over $4.6 billion and corresponds to around 7,000 ETH (or $27 million) burned per day.

The incentive program is set up to “ensure that client teams have a strong incentive to maintain the core Ethereum network over the long term.” The teams eligible for the program include Besu, Erigon, Go Ethereum (Geth), Lighthouse, Lodestar, Nethermind, Nimbus, Prysm, and Teku.

The funds will be available immediately but withdrawals will be vested over several years.

Rebasing With Git

Rebasing is one of the features you’ll want to learn if you plan to work on a tidy, git-based project.


🍣 Where To Rebase?

If you know how many commits you’ve made, use git rebase with the -i flag to enable interactive rebasing. HEAD~<n> refers to the number of commits to roll back (e.g. HEAD~4 if you need to go back 4 commits to reach the common ancestor commit).

git rebase -i HEAD~<n>

Sometimes you commit a lot and forget how many commits you’ve made. To find the most recent common ancestor you share with master, run git merge-base with your branch name as a parameter.

git merge-base <your-branch> master

The above command returns a commit hash that you can use with the git rebase command.
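
The two steps can be combined with command substitution, for example:

git rebase -i $(git merge-base <your-branch> master)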

If you already know the commit hash, you can roll back to that specific commit, moving all newer changes to unstaged. Once the editor pops up, choose which commits to keep (pick), squash, or reword.

git rebase -i <git-ref-hash>

🍣 Merge Latest From Master

If you’ve already rebased your changes and need to get the latest changes from master, all you have to do is rebase onto the latest master. This command will do that.

git rebase origin/master

In case you encounter conflicts, resolve them first, then continue the rebase instead of creating a new merge commit.

git rebase --continue

🍣 Overwriting Remote Repo Changes

Once all is done, overwrite the remote branch if you’ve already pushed it. The command below force-pushes, ignoring the current ref on the remote repo.

git push -f
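
A slightly safer habit is --force-with-lease, which refuses to overwrite the remote branch when someone else has pushed to it since you last fetched:

git push --force-with-lease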

🍣 Did Something Wrong? In Need Of Rollback

Did something go wrong while resolving conflicts? Don’t worry, you can still see your previous states using the command git reflog, short for reference log.
You can check out a reference hash and then re-apply your changes.

git reflog
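
For example, to move the branch back to the state it had two reflog entries ago (double-check the reflog output first, since this rewrites your working branch):

git reset --hard 'HEAD@{2}'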

Limit Window Subsystem Linux v2 (WSL2) Resources To Speed Up Kubernetes

Windows Subsystem for Linux v2 (WSL2) is an iteration of the VM technology created by Microsoft, moving from Hyper-V to WSL; this is the second generation of WSL. By default, WSL2 sets no limits on your workstation’s resources (CPU, RAM, and disk), so if you have an 8-core CPU and 16 GB of memory, it can use all of that. The problem is that this sometimes slows down your host machine, so we limit WSL2’s resource consumption.

Limit WSL Resource Consumption

In your profile directory %USERPROFILE%, create a new file named .wslconfig. Set its content to the following:

[wsl2]
memory=8GB
processors=8

Change the settings based on your workstation’s capability; this is what works for me.

Next, open a PowerShell terminal in administrator mode and restart the LxssManager service, which manages WSL2.

Get-Service LxssManager | Restart-Service

You could also use the wsl --shutdown method to restart WSL. Afterwards, check that the vmmem process no longer consumes memory beyond its limit.
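
The shutdown route stops all running WSL distros at once (they start again on next use):

wsl --shutdown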

Troubleshoot

If the changes are still not reflected, try restarting your machine and also restarting Docker Desktop.

Simple Rust Mutation Relationship Diagram

Rust mutation can be somewhat confusing if you’re a beginner. It’s similar to the C++ question of where to put the asterisk (*) and ampersand (&) in a variable declaration: moving them around changes exactly what is mutable.

Here is a simple diagram on Rust mutation that I found on StackOverflow (SO). I can’t find the exact link to reference as this one is stored in my notes.


        a: &T == const T* const a;      // can't mutate either
    mut a: &T == const T* a;            // can't mutate what is pointed to
    a: &mut T == T* const a;            // can't mutate pointer
mut a: &mut T == T* a;                  // can mutate both
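
To make the diagram concrete, here is a small, compilable sketch of the four forms using i32 as T; the commented-out lines are the ones the borrow checker rejects.

fn main() {
    let x = 1;
    let y = 2;

    // a: &T -> can't mutate either
    let a: &i32 = &x;
    // *a = 10; // error: cannot assign through a `&` reference
    // a = &y;  // error: `a` itself is not mutable
    println!("a = {a}");

    // mut a: &T -> can repoint the reference, but not mutate what it points to
    let mut b: &i32 = &x;
    println!("b = {b}");
    b = &y; // OK: repointing
    // *b = 10; // error: still a shared reference
    println!("b = {b}");

    // a: &mut T -> can mutate the pointee, but not repoint
    let mut m = 3;
    let c: &mut i32 = &mut m;
    *c = 30; // OK: mutating through the reference
    // c = &mut m; // error: `c` itself is not mutable
    println!("c = {c}");

    // mut a: &mut T -> can mutate both
    let mut n = 4;
    let mut o = 5;
    let mut d: &mut i32 = &mut n;
    *d = 40;    // OK
    d = &mut o; // OK
    *d = 50;    // OK
    println!("d = {d}");
}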

Converting Rust String To And From

Rust’s &str and String are different in that &str is a borrowed, fixed-size string slice (often with a static lifetime), while String is owned, heap-allocated, and can be made mutable and appended to. Most of the time you’ll be working with String in Rust when re-allocating and moving values between structs.


There are times you may need to convert between owned strings, string slices, and raw bytes. Here are the ways to do it:

From &str

  • &str -> String has many equally valid methods: String::from(st), st.to_string(), st.to_owned().
    • But I suggest you stick with one of them within a single project. The major advantage of String::from is that you can use it as an argument to a map method. So instead of x.map(|s| String::from(s)) you can often use x.map(String::from).
  • &str -> &[u8] is done by st.as_bytes()
  • &str -> Vec<u8> is a combination of &str -> &[u8] -> Vec<u8>, i.e. st.as_bytes().to_vec() or st.as_bytes().to_owned()

From String

  • String -> &str should just be &s where coercion is available or s.as_str() where it is not.
  • String -> &[u8] is the same as &str -> &[u8]: s.as_bytes()
  • String -> Vec<u8> has a custom method: s.into_bytes()

From &[u8]

  • &[u8] -> Vec<u8> is done by u.to_owned() or u.to_vec(). They do the same thing, but to_vec has the slight advantage of being unambiguous about the type it returns.
  • &[u8] -> &str doesn’t actually exist, that would be &[u8] -> Result<&str, Error>, provided via str::from_utf8(u)
  • &[u8] -> String is the combination of &[u8] -> Result<&str, Error> -> Result<String, Error>

From Vec<u8>

  • Vec<u8> -> &[u8] should be just &v where coercion is available, or as_slice where it’s not.
  • Vec<u8> -> &str is the same as Vec<u8> -> &[u8] -> Result<&str, Error> i.e. str::from_utf8(&v)
  • Vec<u8> -> String doesn’t actually exist, that would be Vec<u8> -> Result<String, Error> via String::from_utf8(v)

Coercion is available whenever the target is not generic but explicitly typed as &str or &[u8], respectively. The Rustonomicon has a chapter on coercions with more details about coercion sites.
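
Here is a small, compilable sketch of a few of these conversions, including handling the UTF-8 error case instead of unwrapping:

fn main() {
    let st: &str = "héllo";

    // &str -> String, &[u8], Vec<u8>
    let s: String = String::from(st);
    let bytes: &[u8] = st.as_bytes();
    let v: Vec<u8> = st.as_bytes().to_vec();

    // Vec<u8> -> String: fallible, so handle the error instead of unwrapping
    match String::from_utf8(v) {
        Ok(round_tripped) => println!("round-tripped: {round_tripped}"),
        Err(e) => eprintln!("not valid UTF-8: {e}"),
    }

    // String -> &str via coercion (explicit target type)
    let back: &str = &s;
    println!("{back} is {} bytes long", bytes.len());
}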


tl;dr

&str    -> String  | String::from(s) or s.to_string() or s.to_owned()
&str    -> &[u8]   | s.as_bytes()
&str    -> Vec<u8> | s.as_bytes().to_vec() or s.as_bytes().to_owned()
String  -> &str    | &s if possible* else s.as_str()
String  -> &[u8]   | s.as_bytes()
String  -> Vec<u8> | s.into_bytes()
&[u8]   -> &str    | std::str::from_utf8(s).unwrap(), but don't**
&[u8]   -> String  | String::from_utf8(s.to_vec()).unwrap(), but don't**
&[u8]   -> Vec<u8> | s.to_vec() or s.to_owned()
Vec<u8> -> &str    | std::str::from_utf8(&s).unwrap(), but don't**
Vec<u8> -> String  | String::from_utf8(s).unwrap(), but don't**
Vec<u8> -> &[u8]   | &s if possible* else s.as_slice()

* target should have explicit type (i.e., checker can't infer that)

** handle the error properly instead

Move Docker Desktop Data to Another Location (WSL 2)

In Docker Desktop for Windows (the WSL2 version), you don’t usually have options to increase memory and disk space, as they are managed directly by Windows.


The Docker Desktop data can originally be found at %USERPROFILE%\AppData\Local\Docker\wsl\data.

🚚 Export Docker Data

In order to make this work, first shut down Docker Desktop. This can be done by right-clicking the Docker system tray icon and choosing Quit Docker Desktop from the context menu.

Next is open your command prompt and type the following:

wsl --list -v

When run, this returns the state of all WSL distros.

  NAME                   STATE           VERSION
* docker-desktop         Stopped         2
  docker-desktop-data    Stopped         2

After that, we export docker-desktop-data into a tar archive. We will assume you are planning to move the Docker data to the D: drive and have already created a folder there named Docker.

wsl --export docker-desktop-data "D:\docker-desktop-data.tar"

Next is to unregister docker-desktop-data from WSL.
The command below will delete ext4.vhdx at %USERPROFILE%\AppData\Local\Docker\wsl\data\ext4.vhdx, so make sure you back it up first.

wsl --unregister docker-desktop-data

🚛 Import Docker Data

After the export, we import docker-desktop-data back into WSL.

wsl --import docker-desktop-data "D:\Docker" "D:\docker-desktop-data.tar" --version 2

The ext4.vhdx will now reside in the D:\Docker folder. Start Docker Desktop and verify the changes.
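
You can also confirm that the distro was re-registered before starting Docker Desktop:

wsl --list -v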

If everything works out, you can now delete the tar archive you created earlier (D:\docker-desktop-data.tar). Please don’t delete ext4.vhdx, or you will lose all your Docker images and containers.

In case the Docker icon turns red in Docker Desktop, clear the Docker cache, which can be found in the Docker Desktop settings.