Where It All Started.

Life, stock trading, investments, business, and startups. Mostly programming stuff.

Category: Software Development

Kintsugi Merge Testnet For Ethereum (ETH) Is Now Live

The Kintsugi testnet, the latest step in replacing Ethereum’s Proof-of-Work consensus mechanism with Proof-of-Stake, has been deployed. The mainnet and the Beacon Chain are expected to merge in Q1/Q2 2022. According to a release from ConsenSys, over 8.4 million ETH has been staked on Ethereum 2.0’s Beacon Chain.

The Ethereum Foundation’s Tim Beiko wrote in his announcement, “The Kintsugi testnet provides the community an opportunity to experiment with post-merge Ethereum and begin to identify any issues.”

The Kintsugi testnet will help prepare for the “merge” to Ethereum 2.0. Following the merge, Ethereum 2.0 will move toward “Phase 2,” which will introduce sharding, a scalability feature meant to improve fees and transaction times. Sharding is expected to arrive in late 2022.

Rebasing With Git

Rebasing is one of the features you’ll want to know if you plan to keep a Git-based project’s history neat.


🍣 Where To Rebase?

If you know how many commits you have made, use git rebase with the -i flag to enable interactive rebasing. The HEAD~<n> corresponds to the number of commits to roll back (e.g. HEAD~4 if you need to go back 4 commits to reach the common ancestor commit).

git rebase -i HEAD~<n>

Sometimes you commit a lot and forget how many commits you’ve made. To find the common ancestor you share with master, use git merge-base with your branch name as a parameter.

git merge-base <your-branch> master

The above command will return a commit hash which you can use with the git rebase command.

If you already know the commit hash, you can rebase interactively onto that specific commit. Once the editor pops up, you choose which commits to retain (pick), squash, or reword.

git rebase -i <git-ref-hash>
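
You can also feed the merge-base result straight into the interactive rebase. A quick sketch, assuming your branch is named my-feature and you are currently on it:

git rebase -i $(git merge-base my-feature master)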

🍣 Merge Latest From Master

If you’ve already rebased your changes and need to get the latest changes from master, all you have to do is rebase onto the latest master.
This command will do that.

git rebase origin/master

If you encounter any conflicts, resolve them first, then continue the rebase instead of creating a new merge commit.

git rebase --continue

🍣 Overwriting Remote Repo Changes

Once all is done, overwrite the remote repo’s latest changes if you’ve already pushed the branch. This will do a force push, ignoring the current ref on the remote repo.

git push -f
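
If others may have pulled the same branch, a slightly safer option is --force-with-lease, which aborts the push when the remote ref has moved since your last fetch:

git push --force-with-lease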

🍣 Did Something Wrong? In Need Of Rollback

Did something wrong while resolving merge conflicts? Don’t worry, you can still see your previous states using the command git reflog, short for reference log.
You can check out a reference hash and then re-merge your changes.

git reflog
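
For example, to jump back to the state before the rebase, reset to the matching reflog entry. The HEAD@{2} below is only an illustration; pick whichever entry in your own reflog corresponds to the pre-rebase state.

git reset --hard HEAD@{2}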


Simple Rust Mutation Relationship Diagram

Rust mutability can be somewhat confusing if you’re a beginner. It’s similar to the C++ question of where to put the asterisk (*) and ampersand (&) in a variable declaration: depending on where the mut keyword and the ampersand go, a declaration becomes more or less mutable.

Here is a simple diagram of Rust mutability that I found on StackOverflow (SO). I can’t find the exact link to reference, as this one is stored in my notes.


    a: &T     == const T* const a;  // can't mutate either
mut a: &T     == const T* a;        // can't mutate what is pointed to
    a: &mut T == T* const a;        // can't mutate the pointer
mut a: &mut T == T* a;              // can mutate both
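
To make the diagram concrete, here is a small compilable sketch (the variable names are mine) that exercises all four combinations:

fn main() {
    let x = 10;
    let mut y = 20;

    // a: &T -- can't repoint the reference, can't mutate the target
    let a: &i32 = &x;
    println!("{}", a);

    // mut a: &T -- the reference can be repointed, the target still can't be mutated
    let mut b: &i32 = &x;
    println!("{}", b);
    b = &y;
    println!("{}", b);

    // a: &mut T -- the target can be mutated, the reference can't be repointed
    let c: &mut i32 = &mut y;
    *c = 21;
    println!("{}", c);

    // mut a: &mut T -- both the reference and the target can change
    let mut z = 30;
    let mut d: &mut i32 = &mut y;
    *d = 22;
    d = &mut z;
    *d = 31;
    println!("{}", d);
}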

Converting Rust String To And From

Rust’s &str and String differ in that &str is a borrowed, fixed-size string slice (often pointing at static data), while String is heap-allocated, owned, and can be made mutable and appended to. Most of the time you’ll be working with String in Rust when allocating and moving values between structs.


There are times you may need to convert between owned strings, string slices, and raw bytes. Here are the ways to do it:

From &str

  • &str -> String has many equally valid methods: String::from(st), st.to_string(), st.to_owned().
    • But I suggest you stick with one of them within a single project. The major advantage of String::from is that you can use it as an argument to a map method. So instead of x.map(|s| String::from(s)) you can often use x.map(String::from).
  • &str -> &[u8] is done by st.as_bytes()
  • &str -> Vec<u8> is a combination of &str -> &[u8] -> Vec<u8>, i.e. st.as_bytes().to_vec() or st.as_bytes().to_owned()

From String

  • String -> &str should just be &s where coercion is available or s.as_str() where it is not.
  • String -> &[u8] is the same as &str -> &[u8]: s.as_bytes()
  • String -> Vec<u8> has a custom method: s.into_bytes()

From &[u8]

  • &[u8] -> Vec<u8> is done by u.to_owned() or u.to_vec(). They do the same thing, but to_vec has the slight advantage of being unambiguous about the type it returns.
  • &[u8] -> &str doesn’t actually exist, that would be &[u8] -> Result<&str, Error>, provided via str::from_utf8(u)
  • &[u8] -> String is the combination of &[u8] -> Result<&str, Error> -> Result<String, Error>

From Vec<u8>

  • Vec<u8> -> &[u8] should be just &v where coercion is available, or as_slice where it’s not.
  • Vec<u8> -> &str is the same as Vec<u8> -> &[u8] -> Result<&str, Error> i.e. str::from_utf8(&v)
  • Vec<u8> -> String doesn’t actually exist, that would be Vec<u8> -> Result<String, Error> via String::from_utf8(v)

Coercion is available whenever the target is not generic but explicitly typed as &str or &[u8], respectively. The Rustonomicon has a chapter on coercions with more details about coercion sites.


tl;dr

&str    -> String  | String::from(s) or s.to_string() or s.to_owned()
&str    -> &[u8]   | s.as_bytes()
&str    -> Vec<u8> | s.as_bytes().to_vec() or s.as_bytes().to_owned()
String  -> &str    | &s if possible* else s.as_str()
String  -> &[u8]   | s.as_bytes()
String  -> Vec<u8> | s.into_bytes()
&[u8]   -> &str    | std::str::from_utf8(s).unwrap(), but don't**
&[u8]   -> String  | String::from_utf8(s.to_vec()).unwrap(), but don't**
&[u8]   -> Vec<u8> | s.to_vec() or s.to_owned()
Vec<u8> -> &str    | std::str::from_utf8(&s).unwrap(), but don't**
Vec<u8> -> String  | String::from_utf8(s).unwrap(), but don't**
Vec<u8> -> &[u8]   | &s if possible* else s.as_slice()

* target should have explicit type (i.e., checker can't infer that)

** handle the error properly instead
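
Here is a small compilable sketch tying a few of these conversions together (the variable names are just for illustration):

fn main() {
    let st: &str = "hello";

    // &str -> String / &[u8] / Vec<u8>
    let s: String = String::from(st);
    let bytes: &[u8] = st.as_bytes();
    let owned_bytes: Vec<u8> = st.as_bytes().to_vec();
    println!("{} {:?} {:?}", s, bytes, owned_bytes);

    // String -> &str, then consume the String into Vec<u8>
    let as_slice: &str = s.as_str();
    println!("{}", as_slice);
    let v: Vec<u8> = s.into_bytes();

    // &[u8] / Vec<u8> -> &str / String: handle the potential UTF-8 error properly
    match std::str::from_utf8(bytes) {
        Ok(text) => println!("{}", text),
        Err(e) => eprintln!("invalid UTF-8: {}", e),
    }
    let s2: String = String::from_utf8(v).expect("valid UTF-8");
    println!("{}", s2);
}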

Install Istio to Docker Desktop (WSL 2)

Istio is an open-source service mesh. The documentation for installing it on Windows is a bit vague, and the website only provides documentation for *nix-based systems.

Install Istio

Download the istioctl.exe executable from the official Istio releases page, extract it, and place it in a folder of your choice. Next, add it to your environment variables.

Adding it to the environment variables is straightforward: add the directory containing istioctl.exe to PATH.
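
For example, in PowerShell you can add it to the PATH of the current session; the C:\istio\bin path below is just an assumption, point it at wherever you extracted the executable:

$env:PATH += ";C:\istio\bin"
istioctl version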

Install the Istio namespace and services in Kubernetes.

istioctl install --set profile=default -y

Also, label the default namespace so Istio automatically injects Envoy sidecar proxies when you deploy applications and services to it.

kubectl label namespace default istio-injection=enabled

Troubleshoot

Determine if the Kubernetes cluster is running in an environment that supports an external load balancer.

kubectl get svc istio-ingressgateway -n istio-system

Check if there are any problems reported by istioctl analyze.

istioctl analyze

Also, check whether the endpoint is empty or returns any headers and data.

curl -s -I -H "Host: keycloak.7f000101.nip.io" http://127.0.0.1

  1. Service mesh is a dedicated infrastructure layer for facilitating service-to-service communications between services or microservices, using a proxy.
  2. Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.
  3. Istio is a service mesh—a modernized service networking layer that provides a transparent and language-independent way to flexibly and easily automate application network functions.

Dirty Logging With Serilog For ASP.NET 5

Here we are again with yet another tutorial on using Serilog1 as your primary logging framework for both development and production.

If you look on the internet you’ll see there are many ways to integrate Serilog into an existing application; in this tutorial I’ll show you what I’ve been using for all the projects I’ve handled.

You may ask: what’s the difference between my setup and the others?


Programming isn’t about what you know; it’s about what you can figure out.

— Chris Pine.

The setup I’ll show you allows easy addition of new sinks and on-the-fly modification of the log format without needing to recompile your app.

So how does that work?

Follow me and let’s dive into how to implement it. 👆

Prerequisites

First of all, you must have the .NET 5.0 SDK (Software Development Kit) installed on your computer. I also assume you are running Windows 10 or Linux with a properly configured environment.

If you are on Windows 10 and already have Visual Studio2 2019, just update it to the most recent version; that way your system will have the latest .NET Core SDK3 version.

So where do we start?

First, let’s create a test-bed project. Type the command below in an existing shell console (e.g. bash, PowerShell, cmd, etc.).

dotnet new web -f net5.0 --no-https --name SerilogDemo

The command above will create a new project targeting the .NET 5 framework. Of the flags above, --no-https sets the project up as a non-secured (non-SSL) empty web API (meaning it will not generate a cert or an HTTPS URL).

After the project creation, change directory to the project root and install the following NuGet dependencies.

dotnet add package Serilog
dotnet add package Serilog.AspNetCore
dotnet add package Serilog.Extensions.Hosting
dotnet add package Serilog.Extensions.Logging
dotnet add package Serilog.Settings.Configuration
dotnet add package Serilog.Sinks.Console
dotnet add package Serilog.Sinks.File

Now we dive into the C# implementation. First, import the Serilog library in Program.cs.

using Serilog;

After that, still in the Program.cs file, go to the CreateHostBuilder method and add (or chain) the UseSerilog call to our application startup.

webBuilder.UseStartup<Startup>().UseSerilog();

The UseSerilog call will initialize the Serilog instance. Then we move on to hooking Serilog into our main thread. Convert the Main method’s return type from void to int; the reason is to be able to return different numeric exit codes when an error happens.

Then we initialize the appsettings configuration early on; normally this would be initialized during startup. The reason we initialize it first is that we will use appsettings to provide the configuration for the Serilog logger instance created in this Main method.

IConfigurationRoot configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile(path: "appsettings.json", optional: false, reloadOnChange: true)
    .Build();

Next, initialize the logger instance and handle further exceptions coming from the host builder creation. We’ll just use the generic Exception to catch all exception types (this is not advisable, especially in a production environment; catch specific exception types whenever possible).

Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(configuration)
    .CreateLogger();

Log.Information("SerilogDemo Args: {a}", args);

try
{
    var host = CreateHostBuilder(args).Build();
    Log.Information("Starting serilog demo service");
    host.Run();
    return 0;
}
catch (Exception ex)
{
    Log.Fatal(ex, "SerilogDemo service terminated unexpectedly");
    return 1;
}
finally
{
    Log.CloseAndFlush();
}

When all changes are done, re-check your code against the snippets above. Check whether it’s the same setup; if all is fine, we move on to appsettings.json, as that is the main key to this tutorial.

using System;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Serilog;

namespace SerilogDemo
{
    public sealed class Program
    {
        public static int Main(string[] args)
        {
            IConfigurationRoot configuration = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile(path: "appsettings.json", optional: false, reloadOnChange: true)
                .Build();

            Log.Logger = new LoggerConfiguration()
                .ReadFrom.Configuration(configuration)
                .CreateLogger();

            Log.Information("Token Validator Args: {a}", args);

            try
            {
                var host = CreateHostBuilder(args).Build();
                Log.Information("Starting token validator service");
                host.Run();
                return 0;
            }
            catch (Exception ex)
            {
                Log.Fatal(ex, "Token validator service terminated unexpectedly");
                return 1;
            }
            finally
            {
                Log.CloseAndFlush();
            }
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>()
                        .UseSerilog();
                });
    }
}

In appsettings.json, add the following JSON structure inside the root object.

"Serilog": {
  "Using": [
    "Serilog.Sinks.Console",
    "Serilog.Sinks.File"
  ],
  "Enrich": [
    "FromLogContext",
    "WithMachineName",
    "WithThreadId"
  ],
  "WriteTo": [
    {
      "Name": "Console",
      "Args": {
        "outputTemplate": "[{Timestamp:HH:mm:ss} {Level}] {SourceContext}{NewLine}{Message:lj}{NewLine}{Exception}{NewLine}"
      }
    },
    {
      "Name": "File",
      "Args": {
        "path": "logs\\log_.txt",
        "rollingInterval": "Day"
      }
    }
  ]
}

Analyze the JSON structure: if you look carefully at the Using key, it’s specified to use Console and File. What this means is that it will log to the console and to the specified log file.

The Enrich key specifies what extra data should be attached to each log event and where to get it. The WriteTo key lets you configure each sink (e.g. change the log format).

After that, we check that everything is working by compiling and running the program. If everything is okay, you should now see logs in your console, and a file will be created containing the same logs.

dotnet run

That’s basically all. If you want to use Serilog in your controllers, check below.

(OPTIONAL)

If you want to use Serilog in a controller or any other class, first import the abstract logging class.

using Microsoft.Extensions.Logging;

Then, in the controller or class constructor, add an ILogger typed with the name of the current class. Below is a basic sample setup.

private readonly ILogger<InformHub> _logger;

public InformHub(ILogger<InformHub> logger)
{
    _logger = logger;
}

public async Task Command()
{
    _logger.LogInformation("Hello, World!");
}

To use the logger in a class, just call the logger instance and the method for the log verbosity you want. Above we use LogInformation, i.e. INFO verbosity.
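
The other verbosity levels work the same way. A quick sketch (the message text and the payload, key, requestId, and ex variables are just placeholders for your own values):

// Structured logging at other verbosity levels
_logger.LogDebug("Payload received: {Payload}", payload);
_logger.LogWarning("Cache miss for key {Key}", key);
_logger.LogError(ex, "Failed to process request {RequestId}", requestId);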

(ANOTHER OPTIONAL)

This is for IIS: if you ever want to use file logging, you may find that with the default setup it will not log to files. In order to log to files, first create the log directory (specified in appsettings.json). Then change the folder’s ACL settings and add the user IIS_USRS\<name-of-iis-site-name>.

Fig. 1: Properties folder

Add Write access to the log folder for the user IIS_USRS\<name-of-iis-site-name>.

Fig. 2: Advanced permission editing modal

Then just restart the site and check if everything is working. If it’s still not working, just redo the steps above.

That’s all guys!

Conclusion

Logging is one of the most important things to do in development and production. Having no loggers in your web application is not advisable (with the exception of a fully tested, high-security application); loggers can quickly point out what exception occurred and with what details on the client side.

You can find the complete repository here.

Let me know in the comments if you have questions or queries; you can also DM me directly. Follow me for similar articles, tips, and tricks ❤.


  1. Serilog is a diagnostic logging library for .NET applications. It is easy to set up, has a clean API, and runs on all recent .NET platforms. While it’s useful even in the simplest applications, Serilog’s support for structured logging shines when instrumenting complex, distributed, and asynchronous applications and systems. ↩︎
  2. Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs, as well as websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms such as Windows API, Windows Forms, Windows Presentation Foundation, Windows Store and Microsoft Silverlight. It can produce both native code and managed code. ↩︎
  3. .NET (previously named .NET Core) is a free and open-source, managed computer software framework for Windows, Linux, and macOS operating systems. It is a cross-platform successor to .NET Framework. The project is primarily developed by Microsoft employees by way of the .NET Foundation, and released under the MIT License. ↩︎

Quick Simple GraphQL Provider On ASP.NET Core 5.0

My most recent project involves implementing a GraphQL1 provider using C#. This is my first time implementing this in C#, but I’ve already implemented it before in Java and also in Rust. These are the simple things I’ve learned while implementing a basic (hello world) GraphQL server in C#.


In your thoughts, you need to be selective. Thoughts are powerful vehicles of attention. Only think positive thoughts about yourself and your endeavors, and think well of the endeavors of others.

— Frederick Lenz.

Come on, join me and let’s dive in! ☄

Prerequisites

First of all, you must have the .NET Core 5.0 SDK (Software Development Kit) installed on your computer. I also assume you are running Windows 10 or Linux with a properly configured environment.

If you are on Windows 10 and already have Visual Studio2 2019, just update it to the most recent version; that way your system will have the latest .NET Core SDK version.

So where do we start?

First, we create our ASP.NET3 Web API project from the command line. Execute the command below to create the project.

dotnet new web -f net5.0 --no-https --name GqlNet5Demo

This command creates a project targeting .NET Core 5.0. The --no-https flag specifies that we will only be working with a non-SSL HTTP server config, and the project type we generate is web (an empty ASP.NET Core template).

If the command succeeds, we should now see the folder GqlNet5Demo. Change directory into it so we can start making changes to the template project.

cd GqlNet5Demo

Inside the project folder, we now need to add the core of the GraphQL.NET server library and its default deserializer. Execute these commands in an open shell:

dotnet add package GraphQL.Server.Transports.AspNetCore
dotnet add package GraphQL.Server.Transports.AspNetCore.SystemTextJson

The next package is optional and only needed if you want GraphQL WebSocket support, which is especially useful if you are implementing a subscription-based GraphQL API. For our project, let’s add this dependency anyway.

dotnet add package GraphQL.Server.Transports.WebSockets

Also add this package, which helps with debugging GraphQL statements in the browser. It installs an embedded GraphQL Playground in our demo project; just don’t forget to remove it on a production server.

dotnet add package GraphQL.Server.Ui.Playground

After all those packages are installed, let’s move on to editing our first file. Create a file named EhloSchema.cs and place it in the root folder. In the file, import the library namespaces we will be using.

using GraphQL;
using GraphQL.Resolvers;
using GraphQL.Types;

After importing the needed libraries, we implement our root query type, which will contain the query structure of our GraphQL schema. The query type is what you use when you only want to read data.

public sealed class EhloQuery : ObjectGraphType
{
    public EhloQuery()
    {
        Field<StringGraphType>("greet", description: "A type that returns a simple hello world string", resolve: context => "Hello, World");
    }
}

Above, we also implemented our first query field, named “greet”, which can then be called like this in the GraphQL Playground.

query {
  greet
}

Creating a GraphQL field starts with Field or AddField, followed by the type the field returns, its required name, and of course the resolver.

If called in the GraphQL Playground, it outputs a JSON document with a data payload containing “Hello, World” (a sample response is shown below). To be able to run the GraphQL Playground, let’s continue with the tutorial.
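
The response is the standard GraphQL JSON envelope, roughly:

{
  "data": {
    "greet": "Hello, World"
  }
}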

Still in the EhloSchema.cs file, add the code below to create our first schema. This schema maps the Query to an instance of our EhloQuery class.

public sealed class EhloSchema : Schema
{
    public EhloSchema(IServiceProvider provider) : base(provider)
    {
        Query = new EhloQuery();
    }
}

That’s all for now for the EhloSchema.cs file! This is the bare minimum needed to create a super basic GraphQL server.

Let’s now start modifying the Startup.cs file. Add these new imports, which are needed for our constructor.

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

These two imports allow us to use the IConfiguration and IWebHostEnvironment interfaces and their respective methods. The next thing is to implement our constructor and class-scoped properties. See below for what to implement.

public IConfiguration Configuration { get; }
public IWebHostEnvironment Environment { get; }

public Startup(IConfiguration configuration, IWebHostEnvironment environment)
{
    Configuration = configuration;
    Environment = environment;
}

After implementing the constructor, we also need to import the GraphQL base library.

using GraphQL.Server;

Then, in the ConfigureServices method, we add and configure the GraphQL service.

services
    .AddSingleton<EhloSchema>()
    .AddGraphQL((options, provider) =>
    {
        options.EnableMetrics = Environment.IsDevelopment();

        var logger = provider.GetRequiredService<ILogger<Startup>>();
        options.UnhandledExceptionDelegate = ctx => logger.LogError("{Error} occured", ctx.OriginalException.Message);
    })
    .AddSystemTextJson(deserializerSettings => { }, serializerSettings => { })
    .AddErrorInfoProvider(opt => opt.ExposeExceptionStackTrace = Environment.IsDevelopment())
    .AddWebSockets()
    .AddDataLoader()
    .AddGraphTypes(typeof(EhloSchema));

If you look at the code above, we first register our schema as a singleton that will be initialized once. Then we set parameters for our GraphQL server and set its default deserializer. Also, don’t forget that we add WebSocket and DataLoader support to it. The DataLoader is useful for preventing the N+1 query problem that GraphQL servers are prone to. More information can be found at this link.

We now need to call the respective middlewares and activate the services. First, activate the WebSocket protocol on our server, then enable the GraphQL WebSocket middleware and inject our schema. The /graphql path is the endpoint where the schema will be served.

app.UseWebSockets();
app.UseGraphQLWebSockets<EhloSchema>("/graphql");

app.UseGraphQL<EhloSchema>("/graphql");
app.UseGraphQLPlayground();

Don’t forget that we also need to activate our GraphQL Playground so we can use it against our demo GraphQL server. Here’s the full source of Startup.cs; check whether you forgot or missed something.

using GraphQL.Server;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace GqlNet5Demo
{
    public class Startup
    {

        public IConfiguration Configuration { get; }
        public IWebHostEnvironment Environment { get; }

        public Startup(IConfiguration configuration, IWebHostEnvironment environment)
        {
            Configuration = configuration;
            Environment = environment;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            services
                .AddSingleton<EhloSchema>()
                .AddGraphQL((options, provider) =>
                {
                    options.EnableMetrics = Environment.IsDevelopment();

                    var logger = provider.GetRequiredService<ILogger<Startup>>();
                    options.UnhandledExceptionDelegate = ctx => logger.LogError("{Error} occured", ctx.OriginalException.Message);
                })
                .AddSystemTextJson(deserializerSettings => { }, serializerSettings => { })
                .AddErrorInfoProvider(opt => opt.ExposeExceptionStackTrace = Environment.IsDevelopment())
                .AddWebSockets()
                .AddDataLoader()
                .AddGraphTypes(typeof(EhloSchema));
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseRouting();

            app.UseWebSockets();
            app.UseGraphQLWebSockets<EhloSchema>("/graphql");

            app.UseGraphQL<EhloSchema>("/graphql");
            app.UseGraphQLPlayground();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapGet("/", async context =>
                {
                    await context.Response.WriteAsync("Hello World!");
                });
            });
        }
    }
}

Now it’s time to run our ASP.NET GraphQL API server. Do that by executing the command in our previously opened shell:

dotnet run

If it’s all successful, head over to http://localhost:<port>/ui/playground to access the GraphQL Playground. The <port> field pertains to the port indicated in the applicationUrl inside launchSettings.json, which can be found inside your project.

If you encounter any problems, just re-check all the steps above or check the full source at the bottom of this article.

Our next step is to implement a more complex query structure. We first need to implement these classes in our EhloSchema.cs.

public sealed class Message
{
    public string Content { get; set; }
    public DateTime CreatedAt { get; set; }
}

public sealed class MessageType : ObjectGraphType<Message>
{
    public MessageType()
    {
        Field(o => o.Content);
        Field(o => o.CreatedAt, type: typeof(DateTimeGraphType));
    }
}

This creates two classes: Message and MessageType. The Message class is our model class that temporarily stores data in our program’s memory, and MessageType is the GraphQL type that maps to our Message model.

After that, we need to add this new field type in our EhloQuery constructor. This is a simple example of returning a low-complexity type from a query on our server.

Field<MessageType>("greetComplex", description: "A type that returns a complex data", resolve: context =>
{
    return new Message
    {
        Content = "Hello, World",
        CreatedAt = DateTime.UtcNow,
    };
});

Then, to test it, open the GraphQL Playground and execute this GraphQL statement.

query {
  greetComplex {
    content
    createdAt
  }
}

If everything is okay, it returns JSON containing no error message and a correct response with a structure similar to the Message data structure.

Next, we move to the mutation type. The mutation type is what you use when you want to modify data; in CRUD terms it covers the CUD part (Create, Update, and Delete). We now need to create the root mutation type, so just implement the following class below.

public sealed class EhloMutation : ObjectGraphType<object>
{
    public EhloMutation()
    {
        Field<StringGraphType>("greetMe",
                arguments: new QueryArguments(
                    new QueryArgument<StringGraphType>
                    {
                        Name = "name"
                    }),
                resolve: context =>
                {
                    string name = context.GetArgument<string>("name");
                    string message = $"Hello {name}!";
                    return message;
                });
    }
}

In the constructor, you’ll see we implemented a field that returns a string and accepts one string argument. We also need to wire the mutation class we created into our main schema. Add the line below in the constructor of our EhloSchema class.

Mutation = new EhloMutation();

After implementing the mutation, build and run the whole project and go to the GraphQL Playground to test it. In our case the mutation doesn’t modify any stored data; it just returns a simple string built from the argument. A mutation statement starts with mutation instead of query.

mutation {
  greetMe(name: "Wick")
}
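
Given the resolver above, the response should look roughly like this:

{
  "data": {
    "greetMe": "Hello Wick!"
  }
}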

Next, we implement a GraphQL subscription. Subscriptions in GraphQL are mostly used for events (e.g. someone registered, login notifications, system notifications, etc.), but they can be used for anything that can be streamed.

Let’s implement it now on our EhloSchema.cs file.

public sealed class EhloSubscription : ObjectGraphType<object>
{
    public ISubject<string> greetValues = new ReplaySubject<string>(1);

    public EhloSubscription()
    {
        AddField(new EventStreamFieldType
        {
            Name = "greetCalled",
            Type = typeof(StringGraphType),
            Resolver = new FuncFieldResolver<string>(context =>
            {
                var message = context.Source as string;
                return message;
            }),
            Subscriber = new EventStreamResolver<string>(context =>
            {
                return greetValues.Select(message => message).AsObservable();
            }),
        });

        greetValues.OnNext("Hello, World");
    }
}

Similar to the Query and Mutation, we only implement a simple event stream resolver and a subscriber listener. The greetCalled field just returns a simple string whenever OnNext is called. Then, in the EhloSchema constructor, we link the root subscription type the same way as the mutation.

Subscription = new EhloSubscription();

Then we test it in the GraphQL Playground. To call a subscription type, we start with the subscription statement.

subscription {
  greetCalled
}

Here’s the full source code of the EhloSchema.cs file. You can re-check all the changes you made earlier and compare them to this. In this source you’ll also find a low-complexity mutation field that returns a structure; that mutation accepts a custom input structure named MessageInputType (an example call is shown right after the listing).

using GraphQL;
using GraphQL.Resolvers;
using GraphQL.Types;
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;

namespace GqlNet5Demo
{
    public sealed class EhloSchema : Schema
    {
        public EhloSchema(IServiceProvider provider) : base(provider)
        {
            Query = new EhloQuery();
            Mutation = new EhloMutation();
            Subscription = new EhloSubscription();
        }
    }

    public sealed class Message
    {
        public string Content { get; set; }
        public DateTime CreatedAt { get; set; }
    }

    public sealed class MessageType : ObjectGraphType<Message>
    {
        public MessageType()
        {
            Field(o => o.Content);
            Field(o => o.CreatedAt, type: typeof(DateTimeGraphType));
        }
    }

    public sealed class EhloQuery : ObjectGraphType
    {
        public EhloQuery()
        {
            Field<StringGraphType>("greet", description: "A type that returns a simple hello world string", resolve: context => "Hello, World");
            Field<MessageType>("greetComplex", description: "A type that returns a complex data", resolve: context =>
            {
                return new Message
                {
                    Content = "Hello, World",
                    CreatedAt = DateTime.UtcNow,
                };
            });
        }
    }

    public sealed class MessageInputType : InputObjectGraphType
    {
        public MessageInputType()
        {
            Field<StringGraphType>("content");
            Field<DateTimeGraphType>("createdAt");
        }
    }

    public sealed class EhloMutation : ObjectGraphType<object>
    {
        public EhloMutation()
        {
            Field<StringGraphType>("greetMe",
                    arguments: new QueryArguments(
                        new QueryArgument<StringGraphType>
                        {
                            Name = "name"
                        }),
                    resolve: context =>
                    {
                        string name = context.GetArgument<string>("name");
                        string message = $"Hello {name}!";
                        return message;
                    });

            Field<MessageType>("echoMessageComplex",
                    arguments: new QueryArguments(
                        new QueryArgument<MessageInputType>
                        {
                            Name = "message"
                        }),
                    resolve: context =>
                    {
                        Message message = context.GetArgument<Message>("message");
                        return message;
                    });
        }
    }

    public sealed class EhloSubscription : ObjectGraphType<object>
    {
        public ISubject<string> greetValues = new ReplaySubject<string>(1);

        public EhloSubscription()
        {
            AddField(new EventStreamFieldType
            {
                Name = "greetCalled",
                Type = typeof(StringGraphType),
                Resolver = new FuncFieldResolver<string>(context =>
                {
                    var message = context.Source as string;
                    return message;
                }),
                Subscriber = new EventStreamResolver<string>(context =>
                {
                    return greetValues.Select(message => message).AsObservable();
                }),
            });

            greetValues.OnNext("Hello, World");
        }
    }
}
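
As mentioned above, here is an example Playground call for the extra echoMessageComplex mutation; the content and createdAt values are just placeholders:

mutation {
  echoMessageComplex(message: { content: "Hello", createdAt: "2021-01-01T00:00:00Z" }) {
    content
    createdAt
  }
}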

That’s all guys! After checking, build and run the whole project. 🙌

Conclusion

Implementing GraphQL seems a bit daunting at first, but if you know its internals you’ll reap many benefits over normal REST API endpoints; it’s not for this article to discuss the pros and cons of that. Anyway, as you can see it’s fairly easy now to implement GraphQL in C#, but I don’t see many enterprises switching over to it, as it would probably disrupt some of their services.

Let me know in the comments if you have questions or queries, you can also DM me directly.

Follow me for similar articles, tips, and tricks ❤.


  1. GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data. GraphQL was developed internally by Facebook in 2012 before being publicly released in 2015. ↩︎
  2. Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs, as well as websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms such as Windows API, Windows Forms, Windows Presentation Foundation, Windows Store and Microsoft Silverlight. It can produce both native code and managed code. ↩︎
  3. ASP.NET is an open-source, server-side web-application framework designed for web development to produce dynamic web pages. It was developed by Microsoft to allow programmers to build dynamic web sites, applications and services. ↩︎

Find Unintended CSS Overflow

Ever coded a static HTML design site? I’m sure you’ve encountered problems where you’ve just installed some CSS framework and then coded some additional CSS files. After you’ve finished the code, you run it in a browser and then TADAHHH!


You can only find truth with logic if you have already found truth without it.

— Gilbert Keith Chesterton.

There were overflows: horizontal and vertical scrollbars. 😁

It’s a bit cumbersome to find where those scrollbars or overflows originated, and that’s what this article is for. To find those overflows, here’s a one-liner that you can paste directly into your web developer console (press F12, then open the Console tab). The Console tab is where you look at JavaScript logs (console.log). Anyway, copy this and analyze it first before pasting it into your console.

javascript:void(function () {var docWidth = document.documentElement.offsetWidth;[].forEach.call(document.querySelectorAll('*'), function (el) {if (el.offsetWidth > docWidth) console.log(el)})})();

Here is the full script, beautified for readability:

var docWidth =  document.documentElement.offsetWidth;
[].forEach.call(document.querySelectorAll("*"), function(el) {
  if (el.offsetWidth > docWidth) {
    console.log(el);
  }
});

If you analyze the code carefully, you’ll see that the first instruction gets the current document’s offset width. Then it loops over every element on the page; if it finds one wider than that offset width, it logs the offending element so you can inspect its attributes and location.

That’s all guys!

Let me know in the comments if you have questions or queries, you can also DM me directly.

Also follow me for similar articles, tips, and tricks ❤.

Make Subfolder A Git Submodule

Ever been in a situation where a sub-folder of your Git repository needs to branch out as a new repository? In this article I try a new way, using a Python module, to simplify the process. These steps are also recommended by the Git core team as a way to move a sub-folder to a new, clean repository (link).


We’re just clones, sir. We’re meant to be expendable.

— Sinker.

Come on let’s jump in! 🚀

Prerequisites

First of all, you must have Git1 on your machine. Second, you must have an existing test Git repository and Python 32 installed.

If you don’t have Git yet, you can install it from its official sources; it’s available on all platforms, even on Android. Or, if you have Visual Studio3 installed, just locate it on your drive. Python can also be installed using the Visual Studio installer.

So where do we start?

This will be our initial test repository structure:

+-+ root
  |
  +-+ test-repository
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory

The first step is to clone the test repository, either by copying it with the cp command or by creating a duplicate clone using git clone.

+-+ root
  |
  +-+ test-repository
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory
  +-+ test-repository-copy
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory

Then install the Python module named git-filter-repo using the pip utility.

pip3 install git-filter-repo

git-filter-repo simplifies the process of filtering files, directories, and history. As stated on its GitHub page, this tool falls in the same category as git filter-branch. You can check its GitHub repository page for pros and cons against similar tools.

The next thing we do is go into the cloned test repository and filter the directory we want to separate into a new repository (in our case, desired-directory).

cd test-repository-copy
git filter-repo --path desired-directory --subdirectory-filter desired-directory

This will rewrite the cloned repository’s history and delete existing content that does not match the subdirectory filter. The new structure of the directories will look like this:

+-+ root
  |
  +-+ test-repository
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory
  +-+ desired-directory
  | |
  | +-+ contents

The desired-directory will now become its own repository, retaining the history of the files inside it.

After moving the sub-folder to its own repository, we go back to our original test repository and delete the filtered directory.

cd test-repository
git rm -rf desired-directory

Still in test-repository, add a new Git submodule that links to the filtered directory’s repository.

git submodule add ../desired-directory desired-directory
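
Then commit the removal and the new submodule reference in the parent repository (the commit message below is just an example):

git commit -m "Replace desired-directory with a submodule"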

That’s all the steps needed; check if everything is working. See the quick review below for a summarized setup.

Quick Review

Here are the simplified steps based on the above:

  1. Make a copy or clone the current project where the sub-folder is located.
  2. Install git-filter-repo using the command pip3 install git-filter-repo.
  3. On the cloned project folder, filter it based on the directory you want to turn into a new repository with the command git filter-repo --path <new-path> --subdirectory-filter <filtered-directory>
  4. Go to your current project folder and delete the sub-folder using the command git rm -rf <filtered-directory>.
  5. On the current project create the sub-module using git submodule add <new-repo-url> <filtered-directory>.
  6. Check if everything is okay.

That’s all guys, always make a backup of your repository before proceeding. Anyway, it’s under the Git version control system, so you can go back and re-fix things if something goes wrong.

Conclusion

There are many answers on the internet regarding this matter, but most of them don’t explain what will happen when you run these commands. I’ve personally tried this one, since I previously used a mono-repository setup, but it became so large that it was hard to maintain, especially when testing and digging through the history.

Let me know in the comments if you have questions or queries, you can also DM me directly.

Follow me for similar articles, tips, and tricks ❤.


  1. Git (/ɡɪt/) is a distributed version-control system for tracking changes in any set of files, originally designed for coordinating work among programmers cooperating on source code during software development. ↩︎
  2. Python is an interpreted, high-level and general-purpose programming language. Python’s design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. ↩︎
  3. Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs, as well as websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms such as Windows API, Windows Forms, Windows Presentation Foundation, Windows Store and Microsoft Silverlight. It can produce both native code and managed code. ↩︎

Simple Way To Determine Pluralization Of Package Name In Java And Rust

Ever been in a situation where you’re having a hard time naming the package for your code? For me it’s always a struggle, probably because I do have partial OCD/OCPD (Obsessive-Compulsive Personality Disorder)1.

Anyway, I found this answer on StackExchange a while back (hopefully it’s still there), and I’ve been using it ever since to name my namespaces and packages.


It’s double the giggles and double the grins, and double the trouble if you’re blessed with twins.

— Anonymous.

Just in case it gets deleted, here is the full answer:


Use the plural for packages with homogeneous contents and the singular for packages with heterogeneous contents.

A class is similar to a database relation. A database relation should be named in the singular as its records are considered to be instances of the relation. The function of a relation is to compose a complex record from simple data.

A package, on the other hand, is not a data abstraction. It assists with organization of code and resolution of naming conflicts. If a package is named in the singular, it doesn’t mean that each member of the package is an instance of the package; it contains related but heterogeneous concepts. If it is named in the plural (as they often are), I would expect that the package contains homogeneous concepts.

For example, a type should be named TaskCollection instead of TasksCollection, as it is a collection containing instances of a Task. A package named com.myproject.task does not mean that each contained class is an instance of a task. There might be a TaskHandler, a TaskFactory, etc. A package named com.myproject.tasks, however, would contain different types that are all tasks: TakeOutGarbageTask, DoTheDishesTask, etc.
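
Carried over to Rust (my own addition, not part of the quoted answer), the same rule would look something like this minimal sketch with made-up module and type names:

// Homogeneous contents: every item in `tasks` is a task, so the name is plural.
mod tasks {
    pub struct TakeOutGarbageTask;
    pub struct DoTheDishesTask;
}

// Heterogeneous contents: related but different concepts around `Task`, so singular.
mod task {
    pub struct TaskHandler;
    pub struct TaskFactory;
}

fn main() {
    let _ = tasks::TakeOutGarbageTask;
    let _ = tasks::DoTheDishesTask;
    let _ = task::TaskHandler;
    let _ = task::TaskFactory;
}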


Here is the original answer link from Software Engineering StackExchange.
Should package names be singular or plural?

Check the original answer on Stack Exchange if there is an update.


  1. Obsessive–compulsive personality disorder (OCPD) is a cluster C personality disorder marked by an excessive need for orderliness, neatness, and perfectionism. Symptoms are usually present by the time a person reaches adulthood, and are visible in a variety of situations. ↩︎