Where It All Started.

Life, stock trading, investments, business, and startups. Mostly programming stuff.

Category: Software Development

Create Solana Validator RPC Only Node Part 2

If you haven’t checked out the first part yet, make sure to read it at this link.

Set Up the Solana Validator

Before you start, make sure you install the Solana CLI; information on installing the latest release can be found here. Reboot your machine afterward to make sure everything comes up cleanly.

In this config we will connect directly as a non-voting validator on Solana mainnet-beta. First, configure the Solana tools you just installed.

solana config set --url https://api.mainnet-beta.solana.com

Then run the sys-tuner once; this configures your machine's kernel settings to the recommended values.

sudo $(command -v solana-sys-tuner) --user $(whoami) > sys-tuner.log 2>&1 &

This runs one time only, so you need to run it again after every reboot. Alternatively, you can configure a systemd service to do it for you. To configure a systemd service, create a file named solana-sys-tuner.service in the directory /etc/systemd/system.

sudo tee /etc/systemd/system/solana-sys-tuner.service > /dev/null << EOF
[Unit]
Description=Solana System Tuner
After=network.target
Before=sol.service

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/home/ubuntu/solana-sys-tuner.sh

[Install]
WantedBy=multi-user.target
WantedBy=sol.service
EOF

This creates the service; you can now run sudo systemctl enable --now solana-sys-tuner.service to enable it at boot and start it immediately. You can also apply the settings manually without running the solana-sys-tuner binary; just follow this tutorial from the official documentation here.
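
For example, to enable it and confirm it started cleanly:

sudo systemctl enable --now solana-sys-tuner.service
systemctl status solana-sys-tuner.service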

Also, don’t forget to create solana-sys-tuner.sh in your user's home directory.

cat > ~/solana-sys-tuner.sh << EOF
#!/usr/bin/env bash
set -ex

exec /home/ubuntu/.local/share/solana/install/active_release/bin/solana-sys-tuner --user ubuntu
EOF
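
Make the script executable, otherwise systemd will fail to launch it:

chmod +x ~/solana-sys-tuner.sh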

Now you can start the validator. First, prepare a validator keypair.

solana-keygen new -o ~/validator-keypair.json

This creates a validator keypair in your home directory. Don’t forget to save the generated BIP39 seed phrase. DON’T FORGET. Once done, if you forget your public key, you can view it using the command solana-keygen pubkey ~/validator-keypair.json. You will need the public key for later commands.

Set the validator keypair in your Solana cli tool:

solana config set --keypair ~/validator-keypair.json

That’s all for configuration; we can now start the validator. Create a simple shell script to hold the run parameters of the solana-validator command, so it will be easier to modify and adjust later on.

cat > validator.sh << 'EOF'
#!/usr/bin/env bash

set -e

exec solana-validator \
    --no-voting \
    --identity ~/validator-keypair.json \
    --known-validator 7Np41oeYqPefeNQEHSv1UDhYrehxin3NStELsSKCT4K2 \
    --known-validator GdnSyH3YtwcxFvQrVVJMm1JhTS4QVX7MFsX56uJLUfiZ \
    --known-validator DE1bawNcRJB9rVm3buyMVfr8mBEoyyu73NBovf2oXJsJ \
    --known-validator CakcnaRDHka2gXyfbEd2d3xsvkJkqsLw2akB3zsN1D2S \
    --only-known-rpc \
    --ledger /mnt/disks/solana-ledger \
    --accounts /mnt/disks/solana-account \
    --rpc-port 8899 \
    --rpc-bind-address 0.0.0.0 \
    --dynamic-port-range 8000-8020 \
    --entrypoint entrypoint.mainnet-beta.solana.com:8001 \
    --entrypoint entrypoint2.mainnet-beta.solana.com:8001 \
    --entrypoint entrypoint3.mainnet-beta.solana.com:8001 \
    --entrypoint entrypoint4.mainnet-beta.solana.com:8001 \
    --entrypoint entrypoint5.mainnet-beta.solana.com:8001 \
    --expected-genesis-hash 5eykt4UsFv8P8NJdTREpY1vzqKqZKvdpKuc147dw2N9d \
    --wal-recovery-mode skip_any_corrupted_record \
    --limit-ledger-size \
    --no-port-check \
    --enable-rpc-transaction-history \
    --full-rpc-api \
    --log /mnt/disks/solana-spare/logs/solana-validator.log
EOF
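
As with the sys-tuner script, make it executable so the systemd unit we create later can run it:

chmod +x ~/validator.sh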

These flags open RPC to the public, and the node runs in RPC-only mode due to the --no-voting flag. They also enable RPC transaction history via the --enable-rpc-transaction-history flag, which will make the ledger disk usage large. You can read about every flag by running the solana-validator binary with --help.

Wait for a while; the node downloads a very large snapshot so it can catch up to the latest transactions. The ledger will then only contain transaction history from the snapshot onward. This will take time depending on the speed of your machine and your network. Once the percentage indicator disappears, you're good to go.
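
While it syncs, you can also monitor how far your node is behind the cluster with the solana catchup subcommand (exact arguments can vary between CLI versions):

solana catchup ~/validator-keypair.json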

Make sure your node is on the list of validator nodes using the solana gossip command.

solana gossip | grep <pubkey>

That’s all; you’re now part of the validator set. One more thing: to run it on reboot, add a systemd service file. Create the file using the command below, in the same directory as solana-sys-tuner.service.

sudo tee /etc/systemd/system/sol.service > /dev/null << EOF
[Unit]
Description=Solana Validator
After=network.target
Wants=solana-sys-tuner.service
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=ubuntu
LimitNOFILE=1000000
LogRateLimitIntervalSec=0
Environment="PATH=/bin:/usr/bin:/home/ubuntu/.local/share/solana/install/active_release/bin"
ExecStart=/home/ubuntu/validator.sh

[Install]
WantedBy=multi-user.target
EOF

Then enable it at boot using the command sudo systemctl enable --now sol.service, and make sure it has no errors by checking the service status.
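
For example:

sudo systemctl status sol.service
journalctl -u sol.service -f

One last thing regarding logs: they can become large quickly, so make sure to create a logrotate rule. The commands below are grabbed from the official documentation.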

cat > logrotate.sol <<EOF
/mnt/disks/solana-spare/logs/solana-validator.log {
  rotate 7
  daily
  missingok
  postrotate
    systemctl kill -s USR1 sol.service
  endscript
}
EOF

sudo cp logrotate.sol /etc/logrotate.d/sol
sudo systemctl restart logrotate.service

That’s all; reboot and celebrate 🎉! Don’t forget to share and leave a comment if you like this kind of article.

Create Solana Validator RPC Only Node Part 1

🛤 Validator RPC Node Disk Setup

The requirements for the Solana validator node can be found on the official Solana validator requirements page. Link below:

Solana Validator Requirements

In the current production setup, we use an n2-standard-64 machine, which has 128 GB of RAM, 64 cores, and 8 TB of local NVMe SSD. Make sure the storage is NVMe, as that is required for fast bootstrapping of the large ledger and accounts data. After provisioning the VM, we start with its initial configuration.


👷‍♂️ Disk Setup

First, create RAID0 devices using the 24x 375 GB local SSDs. Split the 24 local SSDs into three groups:

  • 12 – Transaction Ledger
  • 10 – Accounts
  • 2 – Logs and Spare Storage

To create the RAID0 device for the transaction ledger, execute this command:

sudo mdadm --create /dev/md0 --level=0 --raid-devices=12 \
  /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 \
  /dev/nvme0n5 /dev/nvme0n6 /dev/nvme0n7 /dev/nvme0n8 \
  /dev/nvme0n9 /dev/nvme0n10 /dev/nvme0n11 /dev/nvme0n12

As you can see in the command flags, we pass --raid-devices=12 because that's how many NVMe drives we are striping together. For the device name, we named the block device /dev/md0 with a level 0 RAID setup. After finishing this, we now go to setting up the accounts storage:

sudo mdadm --create /dev/md1 --level=0 --raid-devices=10 \
  /dev/nvme0n13 /dev/nvme0n14 /dev/nvme0n15 /dev/nvme0n16 \
  /dev/nvme0n17 /dev/nvme0n18 /dev/nvme0n19 /dev/nvme0n20 \
  /dev/nvme0n21 /dev/nvme0n22

This is the same command as above, but look closely at the block device name and the number of RAID devices. If you are using a custom number of NVMe drives, make sure to adjust --raid-devices before appending the device names. Lastly, the spare storage:

sudo mdadm --create /dev/md2 --level=0 --raid-devices=2 \
  /dev/nvme0n23 /dev/nvme0n24
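
At this point, you can verify that all three arrays assembled correctly:

cat /proc/mdstat
sudo mdadm --detail /dev/md0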

With that, all three block devices are in place. We now move on to formatting the devices with our preferred filesystem. In our setup we favor ext4, as it is mature and provides journaling.

sudo mkfs.ext4 -F /dev/md0
sudo mkfs.ext4 -F /dev/md1
sudo mkfs.ext4 -F /dev/md2

Once formatting is done, we move on to mounting the devices. Create the mount points under /mnt/disks/.

sudo mkdir -p /mnt/disks/solana-{ledger,account,spare}

Check that the directories were created by running ls on /mnt/disks/. Also, make sure the read, write, and access permissions on the directories are correct by running chmod on them.

sudo chmod a+w /mnt/disks/solana-ledger
sudo chmod a+w /mnt/disks/solana-account
sudo chmod a+w /mnt/disks/solana-spare

The commands above ensure the correct write permissions are set on the disk mount points. Then set up auto-mounting at boot by appending to /etc/fstab:

echo UUID=`sudo blkid -s UUID -o value /dev/md0` /mnt/disks/solana-ledger ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
echo UUID=`sudo blkid -s UUID -o value /dev/md1` /mnt/disks/solana-account ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
echo UUID=`sudo blkid -s UUID -o value /dev/md2` /mnt/disks/solana-spare ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab

This inserts a record into /etc/fstab containing each filesystem's UUID and mount options. Now that everything's done, reboot the server and check that all the mount points are mounted by running the mount command.
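
For example:

mount | grep /mnt/disks
df -h /mnt/disks/solana-ledger /mnt/disks/solana-account /mnt/disks/solana-spare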

If everything’s good, you're ready for the next tutorial. In case you need to change the RAID0 arrays, remember to stop the specific RAID array device first with sudo mdadm -S /dev/md0.

Check out the second part here.

New Crypto Malware Targets Browser Wallet Extensions

New malware that can corrupt crypto wallets and extensions has been discovered, putting investors at risk of being hacked.

A type of malware known as Mars Stealer, an upgraded version of the information stealer Oski Stealer, has surfaced to prey on web browsers, crypto extensions, and crypto wallets, according to a new blog post by network security specialist 3xp0rt.

Internet Explorer, Firefox, Microsoft Edge, and Thunderbird are some of the most common applications affected by the infection.

It also targets wallets like Bitcoin Core and its derivatives, as well as crypto extensions like MetaMask, TronLink, Binance Chain Wallet, and Coinbase Wallet. MultiDoge and Ethereum wallets might also be harmed in the future.

The virus, according to 3xp0rt, only targets crypto extensions on Chromium-based browsers, with the exception of Opera.

Mars Stealer, according to the cybersecurity expert, works by gaining access to a computer’s internal library files and performing a sophisticated sequence of technical code reconfigurations to carry out its tasks.

According to 3xp0rt:

Mars Stealer is an improved version of Oski Stealer. [It] has added [functionality]: anti-debug check, crypto extension stealing, but Outlook stealing is missing. The code has been refactored, but some algorithm remained stupid as in Oski Stealer.

The virus targets sensitive data saved in the wallet.dat file to steal a user’s wallet information. According to the internet security expert, the file contains information such as the address and private key access data. A grabber, loader, and self-removal function are also included in the virus.

Kintsugi Merge Testnet For Ethereum (ETH) Is Now Live

The Kintsugi testnet, the latest step in replacing Ethereum’s Proof-of-Work consensus mechanism with Proof-of-Stake, has been deployed. The mainnet and beacon chains are expected to merge in Q1/Q2 2022. According to a release from ConsenSys, over 8.4 million ETH has been staked on Ethereum 2.0’s beacon chain.

Ethereum core developer Tim Beiko wrote in his announcement, “The Kintsugi testnet provides the community an opportunity to experiment with post-merge Ethereum and begin to identify any issues.”

The Kintsugi testnet will help prepare for the “merge” to Ethereum 2.0. Following the merge, Ethereum 2.0 will move toward “Phase 2,” which will introduce sharding, a scalability feature that will improve fees and transaction times. Sharding is expected to arrive in late 2022.

Rebasing With Git

Rebasing is one of the features you'll want to master if you plan to work on a tidy git-based project.


🍣 Where To Rebase?

If you know how many commits you've made, rebase using git rebase with the -i flag to enable interactive rebasing. HEAD~<n> corresponds to the number of commits to roll back (e.g. HEAD~4 if you need to go back 4 commits to reach the common ancestor commit).

git rebase -i HEAD~<n>

Sometimes you commit a lot and forget how many commits you've made. To find the most recent common ancestor you share with master, run git merge-base with your branch name as a parameter.

git merge-base <your-branch> master

The above command returns a git hash which you can use in the git rebase command.
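
For example, combining the two steps (my-feature is a hypothetical branch name):

git rebase -i $(git merge-base my-feature master)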

If you already know the git hash, you can roll back to that specific commit, moving all current changes to unstaged. Once the editor pops up, choose which commits to pick, squash, or reword.

git rebase -i <git-ref-hash>

🍣 Merge Latest From Master

If you've already rebased your changes and need the latest changes from master, all you have to do is rebase onto the latest master. This command will do that.

git rebase origin/master

If you encounter any conflicts, resolve them first, then continue the rebase instead of creating a new merge commit.

git rebase --continue

🍣 Overwriting Remote Repo Changes

Once all is done, overwrite your remote repo's latest changes if you had already pushed the branch. This will do a force push, ignoring the current ref on the remote repo.

git push -f
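
A safer alternative is --force-with-lease, which refuses to overwrite the remote branch if someone else pushed to it in the meantime:

git push --force-with-lease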

🍣 Did Something Wrong? In Need Of Rollback

Did something go wrong while resolving conflicts? Don't worry, you can still see your previous states using the command git reflog, short for reference log. You can check out a reference hash and then re-merge your changes.

git reflog

Simple Rust Mutation Relationship Diagram

Rust mutation can be somewhat confusing if you're a beginner. It's similar to the C++ question of where to put the asterisk (*) and ampersand (&) in a variable declaration: moving them (and the mut keyword) around changes which part of the declaration is mutable.

Here is a simple diagram on Rust mutation that I found on StackOverflow (SO). I can’t find the exact link to reference as this one is stored in my notes.


        a: &T == const T* const a;      // can't mutate either
    mut a: &T == const T* a;            // can't mutate what is pointed to
    a: &mut T == T* const a;            // can't mutate pointer
mut a: &mut T == T* a;                  // can mutate both
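
Here's a minimal sketch of how the four cases behave in practice (the variable names are mine, for illustration):

fn main() {
    let x = 10;
    let mut y = 20;
    let mut z = 30;

    let a: &i32 = &x; // a: &T, can't repoint `a` or mutate `*a`
    let mut b: &i32 = &x; // mut a: &T, can repoint `b` but not mutate `*b`
    b = a; // ok: the binding itself is mutable

    let c: &mut i32 = &mut y; // a: &mut T, can mutate `*c` but not repoint `c`
    *c += 1; // ok: the pointee is mutable

    let mut d: &mut i32 = &mut z; // mut a: &mut T, can mutate and repoint
    *d += 1;
    d = c; // ok: `c` is moved into `d`
    *d += 1;

    println!("{a} {b} {d}"); // prints: 10 10 22
}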

Converting Rust String To And From

Rust's &str and String differ in that &str is a borrowed, fixed-size string slice (string literals are &'static str), while String is owned and heap-allocated, can grow, and can be mutated. Most of the time you'll be working with String in Rust when allocating and moving values between structs.


There are times you may need to convert between string slices, byte slices, and owned strings and buffers. Here are the ways to do it:

From &str

  • &str -> String has many equally valid methods: String::from(st), st.to_string(), st.to_owned().
    • But I suggest you stick with one of them within a single project. The major advantage of String::from is that you can use it as an argument to a map method. So instead of x.map(|s| String::from(s)) you can often use x.map(String::from).
  • &str -> &[u8] is done by st.as_bytes()
  • &str -> Vec<u8> is a combination of &str -> &[u8] -> Vec<u8>, i.e. st.as_bytes().to_vec() or st.as_bytes().to_owned()

From String

  • String -> &str should just be &s where coercion is available or s.as_str() where it is not.
  • String -> &[u8] is the same as &str -> &[u8]: s.as_bytes()
  • String -> Vec<u8> has a custom method: s.into_bytes()

From &[u8]

  • &[u8] -> Vec<u8> is done by u.to_owned() or u.to_vec(). They do the same thing, but to_vec has the slight advantage of being unambiguous about the type it returns.
  • &[u8] -> &str doesn’t actually exist, that would be &[u8] -> Result<&str, Error>, provided via str::from_utf8(u)
  • &[u8] -> String is the combination of &[u8] -> Result<&str, Error> -> Result<String, Error>

From Vec<u8>

  • Vec<u8> -> &[u8] should be just &v where coercion is available, or as_slice where it’s not.
  • Vec<u8> -> &str is the same as Vec<u8> -> &[u8] -> Result<&str, Error> i.e. str::from_utf8(&v)
  • Vec<u8> -> String doesn’t actually exist, that would be Vec<u8> -> Result<String, Error> via String::from_utf8(v)

Coercion is available whenever the target is not generic but explicitly typed as &str or &[u8], respectively. The Rustonomicon has a chapter on coercions with more details about coercion sites.


tl;dr

&str    -> String  | String::from(s) or s.to_string() or s.to_owned()
&str    -> &[u8]   | s.as_bytes()
&str    -> Vec<u8> | s.as_bytes().to_vec() or s.as_bytes().to_owned()
String  -> &str    | &s if possible* else s.as_str()
String  -> &[u8]   | s.as_bytes()
String  -> Vec<u8> | s.into_bytes()
&[u8]   -> &str    | std::str::from_utf8(s).unwrap(), but don't**
&[u8]   -> String  | String::from_utf8(s.to_vec()).unwrap(), but don't**
&[u8]   -> Vec<u8> | s.to_vec() or s.to_owned()
Vec<u8> -> &str    | std::str::from_utf8(&s).unwrap(), but don't**
Vec<u8> -> String  | String::from_utf8(s).unwrap(), but don't**
Vec<u8> -> &[u8]   | &s if possible* else s.as_slice()

* target should have explicit type (i.e., checker can't infer that)

** handle the error properly instead
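
To make the table concrete, here's a small sketch (my own example) exercising a few of these conversions, handling the error instead of unwrapping:

fn main() {
    let st: &str = "hello";

    // &str -> String, and String -> &str via as_str
    let s: String = String::from(st);
    let back: &str = s.as_str();
    assert_eq!(st, back);

    // String -> Vec<u8>, then Vec<u8> -> String, with the error handled
    let bytes: Vec<u8> = s.into_bytes();
    match String::from_utf8(bytes) {
        Ok(round_trip) => println!("valid utf-8: {round_trip}"),
        Err(e) => eprintln!("invalid utf-8: {e}"),
    }

    // &[u8] -> &str is fallible for the same reason
    let raw: &[u8] = &[0x68, 0x69];
    if let Ok(text) = std::str::from_utf8(raw) {
        println!("{text}"); // prints "hi"
    }
}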

Install Istio to Docker Desktop (WSL 2)

Istio is an open-source service mesh. The documentation for installing it on Windows is a bit vague, and the website only provides instructions for *nix-based systems.

Install Istio

Download the istioctl.exe executable from the official Istio releases page, extract it, and place it in a folder of your choice. Next, add it to your environment variables.

Adding it to the environment variables is straightforward: add the directory containing istioctl to PATH.

Install the Istio namespace and services in Kubernetes:

istioctl install --set profile=default -y

Also, set Istio to automatically inject Envoy sidecar proxies when deploying applications and services to the default namespace.

kubectl label namespace default istio-injection=enabled
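
You can confirm the label took effect by listing the namespace with its label column:

kubectl get namespace default -L istio-injection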

Troubleshoot

Determine whether the Kubernetes cluster is running in an environment that supports external load balancers:

kubectl get svc istio-ingressgateway -n istio-system

Check whether istioctl analyze reports any problems:

istioctl analyze

Also, check whether the endpoint is empty or returning headers and data.

curl -s -I -H "Host: keycloak.7f000101.nip.io" http://127.0.0.1

  1. Service mesh is a dedicated infrastructure layer for facilitating service-to-service communications between services or microservices, using a proxy.
  2. Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.
  3. Istio is a service mesh—a modernized service networking layer that provides a transparent and language-independent way to flexibly and easily automate application network functions.

Dirty Logging With Serilog For ASP.NET 5

Here we are again with yet another tutorial on using Serilog1 as your primary logging framework for both development and production.

If you look on the internet you’ll see there are many ways to integrate Serilog into an existing application; in this tutorial I’ll show you the setup I’ve been using on all the projects I’ve handled.

You may ask: what's the difference between my setup and the others?


Programming isn’t about what you know; it’s about what you can figure out.

— Chris Pine.

The setup I’ll show you allows easy addition of new sinks and on-the-fly modification of the log format, without recompiling your app.

So how does that work?

Follow me and let’s dive into how to implement it. 👆

Prerequisites

First of all, you must have the .NET 5.0 SDK (Software Development Kit) installed on your computer; I also assume you are running Windows 10 or Linux with a properly set-up environment.

If you are on Windows 10 and already have Visual Studio2 2019, just update it to the most recent version; that ensures your system has the latest .NET Core SDK3 version.

So where do we start?

First, let’s create a test-bed project. Type the command below in an existing shell console (e.g. bash, PowerShell, cmd, etc.).

dotnet new web -f net5.0 --no-https --name SerilogDemo

The command above creates a new project targeting .NET 5. Of the flags above, --no-https sets the project up as a non-secured (non-SSL) empty web API, meaning it will not generate a cert or an HTTPS URL.

After the project creation, change directory to the project root and install the following NuGet dependencies.

dotnet add package Serilog
dotnet add package Serilog.AspNetCore
dotnet add package Serilog.Extensions.Hosting
dotnet add package Serilog.Extensions.Logging
dotnet add package Serilog.Settings.Configuration
dotnet add package Serilog.Sinks.Console
dotnet add package Serilog.Sinks.File

Now we dive into the C# implementation. First, import the Serilog library in Program.cs.

using Serilog;

After that, still in the Program.cs file, go to the CreateHostBuilder method and chain the UseSerilog call onto our application startup.

webBuilder.UseStartup<Startup>().UseSerilog();

The UseSerilog call initializes the Serilog instance. Next, we hook Serilog into our main thread. Convert the Main method's return type from void to int; the reason is to be able to return a different numeric exit code when an error happens.

Then we initialize the appsettings configuration early on; normally this would be initialized at startup. The reason we initialize it first is that we will use appsettings to configure the Serilog logger instance created in this Main method.

IConfigurationRoot configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile(path: "appsettings.json", optional: false, reloadOnChange: true)
    .Build();

Next, initialize the logger instance and handle any exceptions coming from the host builder creation. We'll just use the generic Exception to catch all exception types (this is not advisable, especially in a production environment; if possible, catch specific exception types).

Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(configuration)
    .CreateLogger();

Log.Information("SerilogDemo Args: {a}", args);

try
{
    var host = CreateHostBuilder(args).Build();
    Log.Information("Starting serilog demo service");
    host.Run();
    return 0;
}
catch (Exception ex)
{
    Log.Fatal(ex, "SerilogDemo service terminated unexpectedly");
    return 1;
}
finally
{
    Log.CloseAndFlush();
}

When all changes are done, re-check your code against the full Program.cs below. If everything matches, we move on to appsettings.json, as that is the main key in this tutorial.

using System;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Serilog;

namespace SerilogDemo
{
    public sealed class Program
    {
        public static int Main(string[] args)
        {
            IConfigurationRoot configuration = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile(path: "appsettings.json", optional: false, reloadOnChange: true)
                .Build();

            Log.Logger = new LoggerConfiguration()
                .ReadFrom.Configuration(configuration)
                .CreateLogger();

            Log.Information("Token Validator Args: {a}", args);

            try
            {
                var host = CreateHostBuilder(args).Build();
                Log.Information("Starting token validator service");
                host.Run();
                return 0;
            }
            catch (Exception ex)
            {
                Log.Fatal(ex, "Token validator service terminated unexpectedly");
                return 1;
            }
            finally
            {
                Log.CloseAndFlush();
            }
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>()
                        .UseSerilog();
                });
    }
}

In appsettings.json, add the following JSON structure inside the root.

"Serilog": {
  "Using": [
    "Serilog.Sinks.Console",
    "Serilog.Sinks.File"
  ],
  "Enrich": [
    "FromLogContext",
    "WithMachineName",
    "WithThreadId"
  ],
  "WriteTo": [
    {
      "Name": "Console",
      "Args": {
        "outputTemplate": "[{Timestamp:HH:mm:ss} {Level}] {SourceContext}{NewLine}{Message:lj}{NewLine}{Exception}{NewLine}"
      }
    },
    {
      "Name": "File",
      "Args": {
        "path": "logs\\log_.txt",
        "rollingInterval": "Day"
      }
    }
  ]
}

Analyze the JSON structure. If you look carefully at Using, it's specified to use the Console and File sinks, meaning it will log to the console and to the specified log file.

The Enrich key specifies what extra data should be added to each log event and where to get it. The WriteTo key lets you configure each sink (e.g. change the log format).

After that, check that everything works by compiling and running the program. If everything is okay, you should see logs in your console, and a file will be created containing the same logs.

dotnet run

That’s basically all. If you want to use Serilog in your controllers, check below.

(OPTIONAL)

If you want to use Serilog in a controller or any class, first import the logging abstractions.

using Microsoft.Extensions.Logging;

Then, in the controller or class constructor, inject an ILogger parameterized with the name of the current class. Below is a basic sample setup.

private readonly ILogger<InformHub> _logger;

public InformHub(ILogger<InformHub> logger)
{
    _logger = logger;
}

public async Task Command()
{
    _logger.LogInformation("Hello, World!");
}

To use the logger in a class, just call the logger instance with the method for the log verbosity you want. Above, we use LogInformation, i.e. INFO verbosity.
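
The same ILogger instance exposes the other standard verbosity levels too; for example (illustrative calls, not part of the demo project):

_logger.LogDebug("Verbose diagnostic detail");
_logger.LogWarning("Something unexpected, but recoverable");
_logger.LogError(ex, "Something failed"); // where ex is a caught Exception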

(ANOTHER OPTIONAL)

This one is for IIS. If you want file logging, you may find that with the default setup it will not log to files. To make it work, first create the log directory (as specified in appsettings.json). Then change the folder's ACL settings and add the user IIS_USRS\<name-of-iis-site-name>.

Fig. 1: Properties folder

Add Write access to the log folder for the user IIS_USRS\<name-of-iis-site-name>.

Fig. 2: Advanced permission editing modal

Then just restart the site and check that everything is working. If it's still not working, just redo the steps above.

That’s all guys!

Conclusion

Logging is one of the most important things to do in development and production. Having no loggers in your web application is not advisable (with the exception of fully tested, high-security applications); loggers can easily point out what exceptions and details occurred on the client side.

You can find the complete repository here.

Let me know in the comments if you have questions or queries; you can also DM me directly. Follow me for similar articles, tips, and tricks ❤.


  1. Serilog is a diagnostic logging library for .NET applications. It is easy to set up, has a clean API, and runs on all recent .NET platforms. While it’s useful even in the simplest applications, Serilog’s support for structured logging shines when instrumenting complex, distributed, and asynchronous applications and systems. ↩︎
  2. Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs, as well as websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms such as Windows API, Windows Forms, Windows Presentation Foundation, Windows Store and Microsoft Silverlight. It can produce both native code and managed code. ↩︎
  3. .NET (previously named .NET Core) is a free and open-source, managed computer software framework for Windows, Linux, and macOS operating systems. It is a cross-platform successor to .NET Framework. The project is primarily developed by Microsoft employees by way of the .NET Foundation, and released under the MIT License. ↩︎

Quick Simple GraphQL Provider On ASP.NET Core 5.0

My most recent project involves implementing a GraphQL1 provider using C#. This is my first time implementing this in C#, though I've already implemented it before in Java and in Rust. These are the simple things I've learned while implementing a basic (hello world) GraphQL server in C#.


In your thoughts, you need to be selective. Thoughts are powerful vehicles of attention. Only think positive thoughts about yourself and your endeavors, and think well of the endeavors of others.

— Frederick Lenz.

Come on, join me, and let's dive in! ☄

Prerequisites

First of all, you must have the .NET 5.0 SDK (Software Development Kit) installed on your computer; I also assume you are running Windows 10 or Linux with a properly set-up environment.

If you are on Windows 10 and already have Visual Studio2 2019, just update it to the most recent version; that ensures your system has the latest .NET Core SDK version.

So where do we start?

First we create our ASP.NET3 Web API project on the command-line. Execute the command below to create the project.

dotnet new web -f net5.0 --no-https --name GqlNet5Demo

This command creates a project targeting .NET 5.0. The --no-https flag specifies that we will only work with a non-SSL HTTP server config, and the project type we generate is web (an empty ASP.NET Core template).

If the command succeeds, you should see the folder GqlNet5Demo. Change directory into it so we can start making changes to the template project.

cd GqlNet5Demo

Inside the project folder, we now add the core of the GraphQL .NET server library and its default JSON serializer. Execute these commands in an open shell:

dotnet add package GraphQL.Server.Transports.AspNetCore
dotnet add package GraphQL.Server.Transports.AspNetCore.SystemTextJson

The next package is optional and only needed for GraphQL WebSocket support, which is especially useful if you are implementing a subscription-based GraphQL API. For our project, let's add this dependency.

dotnet add package GraphQL.Server.Transports.WebSockets

Also add this package, which helps with debugging GraphQL statements in the browser. It installs an embedded GraphQL Playground in our demo project; just don't forget to remove it on a production server.

dotnet add package GraphQL.Server.Ui.Playground

With all those packages installed, let's move on to editing our first file. Create a file named EhloSchema.cs and place it in the root folder. In the file, import the library namespaces we will be using.

using GraphQL;
using GraphQL.Resolvers;
using GraphQL.Types;

After importing the needed libraries, we implement our root query type, which contains the query structure of our GraphQL schema. The query type is used when you only want to read data.

public sealed class EhloQuery : ObjectGraphType
{
    public EhloQuery()
    {
        Field<StringGraphType>("greet", description: "A type that returns a simple hello world string", resolve: context => "Hello, World");
    }
}

Above, we also implemented our first query field, named “greet”, which can then be called like this in the GraphQL Playground:

query {
  greet
}

Declaring a GraphQL field starts with Field or AddField, followed by the type the field returns, the required field name, and of course the resolver.

If called in the GraphQL Playground, it outputs JSON with a data object containing “Hello, World”.
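
For reference, a successful call returns the standard GraphQL response envelope:

{
  "data": {
    "greet": "Hello, World"
  }
}

To be able to run the GraphQL Playground, let's continue with the tutorial.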

Still in the EhloSchema.cs file, add the code below to create our first schema. This schema maps the Query to an instance of our EhloQuery class.

public sealed class EhloSchema : Schema
{
    public EhloSchema(IServiceProvider provider) : base(provider)
    {
        Query = new EhloQuery();
    }
}

That’s all for now in the EhloSchema.cs file! This is the bare minimum needed to create a basic GraphQL server.

Let’s now start modifying the Startup.cs file. Add these new imports, which are needed for our constructor.

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

These two imports let us use the IConfiguration and IWebHostEnvironment interfaces and their respective methods. Next, implement our constructor and class-scope properties. See below for what to implement.

public IConfiguration Configuration { get; }
public IWebHostEnvironment Environment { get; }

public Startup(IConfiguration configuration, IWebHostEnvironment environment)
{
    Configuration = configuration;
    Environment = environment;
}

After implementing the constructor, we also need to import the GraphQL base library.

using GraphQL.Server;

Then, in the ConfigureServices method, we add and build the GraphQL service.

services
    .AddSingleton<EhloSchema>()
    .AddGraphQL((options, provider) =>
    {
        options.EnableMetrics = Environment.IsDevelopment();

        var logger = provider.GetRequiredService<ILogger<Startup>>();
        options.UnhandledExceptionDelegate = ctx => logger.LogError("{Error} occured", ctx.OriginalException.Message);
    })
    .AddSystemTextJson(deserializerSettings => { }, serializerSettings => { })
    .AddErrorInfoProvider(opt => opt.ExposeExceptionStackTrace = Environment.IsDevelopment())
    .AddWebSockets()
    .AddDataLoader()
    .AddGraphTypes(typeof(EhloSchema));

Looking at the code above, we first register our schema as a singleton that will be initialized once. Then we set options for the GraphQL server and its default serializer. We also add WebSocket and DataLoader support; the DataLoader is useful to avoid the N+1 query problem common to GraphQL servers. More information can be found at this link.

We now need to wire up the respective middlewares and activate the services. First, activate the WebSocket protocol on our server, then enable the GraphQL WebSocket middleware with our schema. /graphql is the endpoint where the schema will be served.

app.UseWebSockets();
app.UseGraphQLWebSockets<EhloSchema>("/graphql");

app.UseGraphQL<EhloSchema>("/graphql");
app.UseGraphQLPlayground();

Don’t forget that we also need to activate our GraphQL Playground so we can use it on our demo GraphQL server. Here’s the full source of Startup.cs; check whether you forgot or missed something.

using GraphQL.Server;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace GqlNet5Demo
{
    public class Startup
    {

        public IConfiguration Configuration { get; }
        public IWebHostEnvironment Environment { get; }

        public Startup(IConfiguration configuration, IWebHostEnvironment environment)
        {
            Configuration = configuration;
            Environment = environment;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            services
                .AddSingleton<EhloSchema>()
                .AddGraphQL((options, provider) =>
                {
                    options.EnableMetrics = Environment.IsDevelopment();

                    var logger = provider.GetRequiredService<ILogger<Startup>>();
                    options.UnhandledExceptionDelegate = ctx => logger.LogError("{Error} occured", ctx.OriginalException.Message);
                })
                .AddSystemTextJson(deserializerSettings => { }, serializerSettings => { })
                .AddErrorInfoProvider(opt => opt.ExposeExceptionStackTrace = Environment.IsDevelopment())
                .AddWebSockets()
                .AddDataLoader()
                .AddGraphTypes(typeof(EhloSchema));
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseRouting();

            app.UseWebSockets();
            app.UseGraphQLWebSockets<EhloSchema>("/graphql");

            app.UseGraphQL<EhloSchema>("/graphql");
            app.UseGraphQLPlayground();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapGet("/", async context =>
                {
                    await context.Response.WriteAsync("Hello World!");
                });
            });
        }
    }
}

Now it's time to run our ASP.NET GraphQL API server. Do that by executing this command in our previously opened shell:

dotnet run

If all is successful, head to http://localhost:<port>/ui/playground to access the GraphQL Playground. The <port> field refers to the port indicated in the applicationUrl inside launchSettings.json, which can be found inside your project.

If you encounter any problem, just re-check all the things we did above, or check the full source at the bottom of this article.

Our next step is to implement a more complex query structure. First, we need to implement these classes in our EhloSchema.cs.

public sealed class Message
{
    public string Content { get; set; }
    public DateTime CreatedAt { get; set; }
}

public sealed class MessageType : ObjectGraphType<Message>
{
    public MessageType()
    {
        Field(o => o.Content);
        Field(o => o.CreatedAt, type: typeof(DateTimeGraphType));
    }
}

This creates two classes: Message and MessageType. The Message class is our model class that temporarily stores data in our program's memory. MessageType handles the conversion from the GraphQL type to our Message model class.

After that, we implement this new field type in our EhloQuery constructor. This is a simple example of returning a query with a low-complexity type from our server.

Field<MessageType>("greetComplex", description: "A type that returns a complex data", resolve: context =>
{
    return new Message
    {
        Content = "Hello, World",
        CreatedAt = DateTime.UtcNow,
    };
});

Then to test it, we need to access our GraphQL Playground to execute this GraphQL statement.

query {
  greetComplex {
    content
    createdAt
  }
}

If everything is okay, it returns JSON containing no error message and a correct response with a structure similar to the Message data structure.

Next, we move to the mutation type. The mutation type is for modifying data; in CRUD terms it covers the CUD (Create, Update, and Delete). We now need to create the root mutation type; just implement the class below.

public sealed class EhloMutation : ObjectGraphType<object>
{
    public EhloMutation()
    {
        Field<StringGraphType>("greetMe",
                arguments: new QueryArguments(
                    new QueryArgument<StringGraphType>
                    {
                        Name = "name"
                    }),
                resolve: context =>
                {
                    string name = context.GetArgument<string>("name");
                    string message = $"Hello {name}!";
                    return message;
                });
    }
}

In the constructor, you'll see we implemented a field that returns a string and accepts one string argument. We also need to wire this mutation class into our main schema. Add the line below to the constructor of our EhloSchema class.

Mutation = new EhloMutation();

After implementing the mutation, build and run the whole project and go to the GraphQL Playground to test it. In our case the mutation doesn't modify any stored data; it just returns a simple string built from the argument. A mutation statement starts with mutation instead of query.

mutation {
  greetMe(name: "Wick")
}

Next, we implement a GraphQL subscription. Subscriptions in GraphQL are mostly used for events (e.g. someone registered, login notifications, system notifications, etc.), but they can be used for anything that can be streamed.

Let’s implement it now on our EhloSchema.cs file.

public sealed class EhloSubscription : ObjectGraphType<object>
{
    public ISubject<string> greetValues = new ReplaySubject<string>(1);

    public EhloSubscription()
    {
        AddField(new EventStreamFieldType
        {
            Name = "greetCalled",
            Type = typeof(StringGraphType),
            Resolver = new FuncFieldResolver<string>(context =>
            {
                var message = context.Source as string;
                return message;
            }),
            Subscriber = new EventStreamResolver<string>(context =>
            {
                return greetValues.Select(message => message).AsObservable();
            }),
        });

        greetValues.OnNext("Hello, World");
    }
}

Similar to the Query and Mutation, we only implement a simple event stream resolver and a subscriber listener. The greetCalled field just returns a simple string whenever OnNext is called. Then, in the EhloSchema constructor, we link the root subscription type the same way as the mutation.

Subscription = new EhloSubscription();

Then we test it on GraphQL Playground. In order to call a subscription type, we start by using the subscription statement.

subscription {
  greetCalled
}

Here’s the full source code of the EhloSchema.cs file. You can re-check all the changes you made before and compare them to this. In this source you'll also find a low-complexity mutation method that returns a structure; that mutation accepts a custom input structure named MessageInputType.

using GraphQL;
using GraphQL.Resolvers;
using GraphQL.Types;
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;

namespace GqlNet5Demo
{
    public sealed class EhloSchema : Schema
    {
        public EhloSchema(IServiceProvider provider) : base(provider)
        {
            Query = new EhloQuery();
            Mutation = new EhloMutation();
            Subscription = new EhloSubscription();
        }
    }

    public sealed class Message
    {
        public string Content { get; set; }
        public DateTime CreatedAt { get; set; }
    }

    public sealed class MessageType : ObjectGraphType<Message>
    {
        public MessageType()
        {
            Field(o => o.Content);
            Field(o => o.CreatedAt, type: typeof(DateTimeGraphType));
        }
    }

    public sealed class EhloQuery : ObjectGraphType
    {
        public EhloQuery()
        {
            Field<StringGraphType>("greet", description: "A type that returns a simple hello world string", resolve: context => "Hello, World");
            Field<MessageType>("greetComplex", description: "A type that returns a complex data", resolve: context =>
            {
                return new Message
                {
                    Content = "Hello, World",
                    CreatedAt = DateTime.UtcNow,
                };
            });
        }
    }

    public sealed class MessageInputType : InputObjectGraphType
    {
        public MessageInputType()
        {
            Field<StringGraphType>("content");
            Field<DateTimeGraphType>("createdAt");
        }
    }

    public sealed class EhloMutation : ObjectGraphType<object>
    {
        public EhloMutation()
        {
            Field<StringGraphType>("greetMe",
                    arguments: new QueryArguments(
                        new QueryArgument<StringGraphType>
                        {
                            Name = "name"
                        }),
                    resolve: context =>
                    {
                        string name = context.GetArgument<string>("name");
                        string message = $"Hello {name}!";
                        return message;
                    });

            Field<MessageType>("echoMessageComplex",
                    arguments: new QueryArguments(
                        new QueryArgument<MessageInputType>
                        {
                            Name = "message"
                        }),
                    resolve: context =>
                    {
                        Message message = context.GetArgument<Message>("message");
                        return message;
                    });
        }
    }

    public sealed class EhloSubscription : ObjectGraphType<object>
    {
        public ISubject<string> greetValues = new ReplaySubject<string>(1);

        public EhloSubscription()
        {
            AddField(new EventStreamFieldType
            {
                Name = "greetCalled",
                Type = typeof(StringGraphType),
                Resolver = new FuncFieldResolver<string>(context =>
                {
                    var message = context.Source as string;
                    return message;
                }),
                Subscriber = new EventStreamResolver<string>(context =>
                {
                    return greetValues.Select(message => message).AsObservable();
                }),
            });

            greetValues.OnNext("Hello, World");
        }
    }
}

That’s all! After checking, build and run the whole project. 🙌

Conclusion

Implementing GraphQL seems a bit daunting at first, but if you know its internals you'll reap many benefits over plain REST API endpoints; it's not for this article to weigh the pros and cons. As you can see, it's fairly easy now to implement GraphQL in C#, but I don't see many enterprises switching over, as it would probably disrupt some of their services.

Let me know in the comments if you have questions or queries, you can also DM me directly.

Follow me for similar articles, tips, and tricks ❤.


  1. GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data. GraphQL was developed internally by Facebook in 2012 before being publicly released in 2015. ↩︎
  2. Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs, as well as websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms such as Windows API, Windows Forms, Windows Presentation Foundation, Windows Store and Microsoft Silverlight. It can produce both native code and managed code. ↩︎
  3. ASP.NET is an open-source, server-side web-application framework designed for web development to produce dynamic web pages. It was developed by Microsoft to allow programmers to build dynamic web sites, applications and services. ↩︎