Where It All Started.

Life, Stock Trading, Investments, Business and Startup. Mostly programming stuff.

Category: Development

Quick Simple GraphQL Provider On ASP.NET Core 5.0

My most recent project tackles implementing a GraphQL1 provider using C#. This is my first time implementing it in C#, but I’ve already implemented it before in Java and also in Rust. These are the simple things I learned while implementing a simple (hello world) GraphQL server in C#.


In your thoughts, you need to be selective. Thoughts are powerful vehicles of attention. Only think positive thoughts about yourself and your endeavors, and think well of the endeavors of others.

— Frederick Lenz.

Come on, join me and let’s dive in! ☄

Prerequisites

First of all, you must have the .NET 5.0 SDK (Software Development Kit) installed on your computer, and I assume you are currently running Windows 10 or Linux with a proper environment set up.

If you are on Windows 10 and already have Visual Studio2 2019, just update it to the most recent version; that way your system will have the latest .NET SDK.

So where do we start?

First we create our ASP.NET3 Web API project on the command line. Execute the command below to create the project.

dotnet new web -f net5.0 --no-https --name GqlNet5Demo

This command creates a project targeting .NET 5.0. The --no-https flag specifies that we will only work with a non-SSL HTTP server configuration, and the type of project we generate is web (an empty ASP.NET Core template).

If the command succeeds, we should now see the folder GqlNet5Demo. Change directory into it so we can start making changes to the template project.

cd GqlNet5Demo

Inside the project folder, we now need to add the core of the GraphQL.NET server library and its default serializer. Execute these commands in an open shell:

dotnet add package GraphQL.Server.Transports.AspNetCore
dotnet add package GraphQL.Server.Transports.AspNetCore.SystemTextJson

The next package is optional: you only need it for GraphQL WebSocket support, which is especially useful if you are implementing a subscription-based GraphQL API. Since our project will include a subscription, let’s add this dependency.

dotnet add package GraphQL.Server.Transports.WebSockets

Also add this other package, which helps with debugging GraphQL statements in the browser. It installs an embedded GraphQL Playground in our demo project; just don’t forget to remove it on a production server.

dotnet add package GraphQL.Server.Ui.Playground

With all those packages installed, let’s move on to editing our first file. Create a file named EhloSchema.cs and place it in the root folder. In the file, import the library namespaces that we will be using.

using GraphQL;
using GraphQL.Resolvers;
using GraphQL.Types;

After importing the needed namespaces, we implement our root query type, which will contain the query structure of our GraphQL schema. The query type is used when you only want to read data.

public sealed class EhloQuery : ObjectGraphType
{
    public EhloQuery()
    {
        Field<StringGraphType>("greet", description: "A type that returns a simple hello world string", resolve: context => "Hello, World");
    }
}

With the above we implemented our first query field, named “greet”, which can then be called like this in the GraphQL Playground:

query {
  greet
}

Creating a GraphQL field starts with Field or AddField, followed by the graph type that will be returned, the required field name, and of course the resolver.

When called in the GraphQL Playground, it outputs a JSON response whose data field contains “Hello, World”.
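For reference, the response follows the standard GraphQL envelope and should look like this:

{
  "data": {
    "greet": "Hello, World"
  }
}

To be able to run the GraphQL Playground, let’s continue with the tutorial.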

Still in the file EhloSchema.cs, add the class below to create our first schema. This schema maps the root Query to an instance of our EhloQuery class.

public sealed class EhloSchema : Schema
{
    public EhloSchema(IServiceProvider provider) : base(provider)
    {
        Query = new EhloQuery();
    }
}

That’s all for now in the EhloSchema.cs file! This is the most basic requirement for creating a super basic GraphQL server.

Let’s now start modifying the Startup.cs file. Add these new imports, which are needed for our constructor.

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

These two imports give us access to the IConfiguration and IWebHostEnvironment interfaces and their respective methods. Next we implement our constructor and class-scoped properties. See below for what to implement.

public IConfiguration Configuration { get; }
public IWebHostEnvironment Environment { get; }

public Startup(IConfiguration configuration, IWebHostEnvironment environment)
{
    Configuration = configuration;
    Environment = environment;
}

After implementing the constructor, we also need to import the GraphQL server library.

using GraphQL.Server;

Then, in the ConfigureServices method, we add and configure the GraphQL service.

services
    .AddSingleton<EhloSchema>()
    .AddGraphQL((options, provider) =>
    {
        options.EnableMetrics = Environment.IsDevelopment();

        var logger = provider.GetRequiredService<ILogger<Startup>>();
        options.UnhandledExceptionDelegate = ctx => logger.LogError("{Error} occurred", ctx.OriginalException.Message);
    })
    .AddSystemTextJson(deserializerSettings => { }, serializerSettings => { })
    .AddErrorInfoProvider(opt => opt.ExposeExceptionStackTrace = Environment.IsDevelopment())
    .AddWebSockets()
    .AddDataLoader()
    .AddGraphTypes(typeof(EhloSchema));

Looking at the instructions above: first we register our schema as a singleton, so it is initialized only once. Then we set the options of our GraphQL server and its default serializer. We also add the WebSocket transport and the data loader. The data loader helps prevent the N+1 query problem that occurs on GraphQL servers. More information can be found on this link.

We now need to wire up the corresponding middleware to activate the services. First we enable the WebSocket protocol on our server, then the GraphQL WebSocket middleware with our schema injected. The /graphql path is the endpoint where the schema will be served.

app.UseWebSockets();
app.UseGraphQLWebSockets<EhloSchema>("/graphql");

app.UseGraphQL<EhloSchema>("/graphql");
app.UseGraphQLPlayground();

Don’t forget to also activate our GraphQL Playground so we can use it against our demo GraphQL server. Here’s the full source of Startup.cs; check it in case you forgot or missed something.

using GraphQL.Server;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace GqlNet5Demo
{
    public class Startup
    {

        public IConfiguration Configuration { get; }
        public IWebHostEnvironment Environment { get; }

        public Startup(IConfiguration configuration, IWebHostEnvironment environment)
        {
            Configuration = configuration;
            Environment = environment;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            services
                .AddSingleton<EhloSchema>()
                .AddGraphQL((options, provider) =>
                {
                    options.EnableMetrics = Environment.IsDevelopment();

                    var logger = provider.GetRequiredService<ILogger<Startup>>();
                    options.UnhandledExceptionDelegate = ctx => logger.LogError("{Error} occurred", ctx.OriginalException.Message);
                })
                .AddSystemTextJson(deserializerSettings => { }, serializerSettings => { })
                .AddErrorInfoProvider(opt => opt.ExposeExceptionStackTrace = Environment.IsDevelopment())
                .AddWebSockets()
                .AddDataLoader()
                .AddGraphTypes(typeof(EhloSchema));
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseRouting();

            app.UseWebSockets();
            app.UseGraphQLWebSockets<EhloSchema>("/graphql");

            app.UseGraphQL<EhloSchema>("/graphql");
            app.UseGraphQLPlayground();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapGet("/", async context =>
                {
                    await context.Response.WriteAsync("Hello World!");
                });
            });
        }
    }
}

Now it’s time to run our ASP.NET GraphQL API server. Do that by executing this command in the previously opened shell:

dotnet run

If all is successful, head to http://localhost:<port>/ui/playground to access the GraphQL Playground. The <port> placeholder is the port indicated in the applicationUrl inside launchSettings.json, which can be found under your project’s Properties folder.

If you encounter any problem, re-check all the steps we did above or check the full source at the bottom of this article.

Our next step is to implement a more complex query structure. We first need to implement these classes in our EhloSchema.cs.

public sealed class Message
{
    public string Content { get; set; }
    public DateTime CreatedAt { get; set; }
}

public sealed class MessageType : ObjectGraphType<Message>
{
    public MessageType()
    {
        Field(o => o.Content);
        Field(o => o.CreatedAt, type: typeof(DateTimeGraphType));
    }
}

This creates two classes: Message and MessageType. The Message class is our model class, which stores data temporarily in our program’s memory. MessageType maps our Message model class to a GraphQL object type.

After that, we need to implement this new field in our EhloQuery constructor. This is a simple example of returning a low-complexity type from a query on our server.

Field<MessageType>("greetComplex", description: "A type that returns a complex data", resolve: context =>
{
    return new Message
    {
        Content = "Hello, World",
        CreatedAt = DateTime.UtcNow,
    };
});

Then, to test it, open our GraphQL Playground and execute this GraphQL statement.

query {
  greetComplex {
    content
    createdAt
  }
}

If everything is okay, it returns JSON containing no error message and a correct response with a structure similar to the Message data structure.
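For reference, a successful response should look like this (the timestamp is illustrative; yours will be the current UTC time):

{
  "data": {
    "greetComplex": {
      "content": "Hello, World",
      "createdAt": "2021-01-01T00:00:00Z"
    }
  }
}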

Next, we move on to the mutation type. The mutation type is useful when you want to modify data; in CRUD terms it covers the CUD part (Create, Update and Delete). We now need to create the root mutation type, so implement the following class below.

public sealed class EhloMutation : ObjectGraphType<object>
{
    public EhloMutation()
    {
        Field<StringGraphType>("greetMe",
                arguments: new QueryArguments(
                    new QueryArgument<StringGraphType>
                    {
                        Name = "name"
                    }),
                resolve: context =>
                {
                    string name = context.GetArgument<string>("name");
                    string message = $"Hello {name}!";
                    return message;
                });
    }
}

In the constructor, you’ll see we implemented a field that returns a string and accepts one string argument. We also need to wire the mutation class we created into our main schema. Add the line below in the constructor of our EhloSchema class.

Mutation = new EhloMutation();

After implementing the mutation, build and run the whole project and go to the GraphQL Playground to test it. In our case the mutation doesn’t modify any stored data; it just returns a simple string with the argument appended. A mutation statement starts with mutation instead of query.

mutation {
  greetMe(name: "Wick")
}
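
Given the argument above, the response simply echoes the greeting back inside the data envelope:

{
  "data": {
    "greetMe": "Hello Wick!"
  }
}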

Next, we implement a GraphQL subscription. Subscriptions in GraphQL are mostly used for events (e.g. user registrations, login notifications, system notifications, etc.), but they can be used for anything that can be streamed.

Let’s implement it now on our EhloSchema.cs file.

public sealed class EhloSubscription : ObjectGraphType<object>
{
    public ISubject<string> greetValues = new ReplaySubject<string>(1);

    public EhloSubscription()
    {
        AddField(new EventStreamFieldType
        {
            Name = "greetCalled",
            Type = typeof(StringGraphType),
            Resolver = new FuncFieldResolver<string>(context =>
            {
                var message = context.Source as string;
                return message;
            }),
            Subscriber = new EventStreamResolver<string>(context =>
            {
                return greetValues.Select(message => message).AsObservable();
            }),
        });

        greetValues.OnNext("Hello, World");
    }
}

Similar to the Query and Mutation, we only implement a simple event stream resolver and a subscriber. The greetCalled field just returns a simple string whenever OnNext is called on the subject. Then, in the EhloSchema constructor, just as with the mutation, we link the root subscription type.

Subscription = new EhloSubscription();

Then we test it in the GraphQL Playground. To call a subscription type, we start the statement with subscription.

subscription {
  greetCalled
}
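
If the WebSocket transport is wired up correctly, the Playground should immediately stream back the replayed greeting:

{
  "data": {
    "greetCalled": "Hello, World"
  }
}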

Here’s the full source code of the EhloSchema.cs file. You can re-check all the changes you made before and compare them to this. In this source you’ll also find that we implemented a low-complexity mutation field that returns a structure. That mutation also accepts a custom structure, defined by MessageInputType; an example call is shown after the listing.

using GraphQL;
using GraphQL.Resolvers;
using GraphQL.Types;
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;

namespace GqlNet5Demo
{
    public sealed class EhloSchema : Schema
    {
        public EhloSchema(IServiceProvider provider) : base(provider)
        {
            Query = new EhloQuery();
            Mutation = new EhloMutation();
            Subscription = new EhloSubscription();
        }
    }

    public sealed class Message
    {
        public string Content { get; set; }
        public DateTime CreatedAt { get; set; }
    }

    public sealed class MessageType : ObjectGraphType<Message>
    {
        public MessageType()
        {
            Field(o => o.Content);
            Field(o => o.CreatedAt, type: typeof(DateTimeGraphType));
        }
    }

    public sealed class EhloQuery : ObjectGraphType
    {
        public EhloQuery()
        {
            Field<StringGraphType>("greet", description: "A type that returns a simple hello world string", resolve: context => "Hello, World");
            Field<MessageType>("greetComplex", description: "A type that returns a complex data", resolve: context =>
            {
                return new Message
                {
                    Content = "Hello, World",
                    CreatedAt = DateTime.UtcNow,
                };
            });
        }
    }

    public sealed class MessageInputType : InputObjectGraphType
    {
        public MessageInputType()
        {
            Field<StringGraphType>("content");
            Field<DateTimeGraphType>("createdAt");
        }
    }

    public sealed class EhloMutation : ObjectGraphType<object>
    {
        public EhloMutation()
        {
            Field<StringGraphType>("greetMe",
                    arguments: new QueryArguments(
                        new QueryArgument<StringGraphType>
                        {
                            Name = "name"
                        }),
                    resolve: context =>
                    {
                        string name = context.GetArgument<string>("name");
                        string message = $"Hello {name}!";
                        return message;
                    });

            Field<MessageType>("echoMessageComplex",
                    arguments: new QueryArguments(
                        new QueryArgument<MessageInputType>
                        {
                            Name = "message"
                        }),
                    resolve: context =>
                    {
                        Message message = context.GetArgument<Message>("message");
                        return message;
                    });
        }
    }

    public sealed class EhloSubscription : ObjectGraphType<object>
    {
        public ISubject<string> greetValues = new ReplaySubject<string>(1);

        public EhloSubscription()
        {
            AddField(new EventStreamFieldType
            {
                Name = "greetCalled",
                Type = typeof(StringGraphType),
                Resolver = new FuncFieldResolver<string>(context =>
                {
                    var message = context.Source as string;
                    return message;
                }),
                Subscriber = new EventStreamResolver<string>(context =>
                {
                    return greetValues.Select(message => message).AsObservable();
                }),
            });

            greetValues.OnNext("Hello, World");
        }
    }
}
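
To exercise that extra mutation field in the Playground, you can run a statement like this (the content and date values are just examples):

mutation {
  echoMessageComplex(message: { content: "Hello", createdAt: "2021-01-01T00:00:00Z" }) {
    content
    createdAt
  }
}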

That’s all guys! After checking everything, build and run the whole project. 🙌

Conclusion

Implementing GraphQL seems a bit daunting at first, but once you know its internals you’ll reap many benefits using it versus normal REST API endpoints; it’s not for this article to discuss the pros and cons of that. Anyway, as you can see it’s fairly easy now to implement GraphQL in C#, but I don’t see many enterprises switching over to it, as that would probably disrupt some of their services.

Let me know in the comments if you have questions or queries; you can also DM me directly.

Follow me for similar articles, tips, and tricks ❤.


  1. GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data. GraphQL was developed internally by Facebook in 2012 before being publicly released in 2015. ↩︎
  2. Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs, as well as websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms such as Windows API, Windows Forms, Windows Presentation Foundation, Windows Store and Microsoft Silverlight. It can produce both native code and managed code. ↩︎
  3. ASP.NET is an open-source, server-side web-application framework designed for web development to produce dynamic web pages. It was developed by Microsoft to allow programmers to build dynamic web sites, applications and services. ↩︎

Find Unintended CSS Overflow

Ever coded a static HTML design site? I’m sure you’ve encountered the problem where you’ve just installed some CSS framework, then coded some additional CSS files. After you’ve finished the code you run it in a browser and then TADAHHH!


You can only find truth with logic if you have already found truth without it.

— Gilbert Keith Chesterton.

There were overflows: horizontal and vertical scrollbars. 😁

It’s a bit cumbersome to find where those scrollbars or overflows originated, and that’s what this article is for. To find those overflows, here’s a one-liner that you can paste directly into your web developer console (press F12, then open the Console tab). The Console tab is where you look at JavaScript logs (console.log); just copy this and analyze it first before pasting it into your console.

javascript:void(function () {var docWidth = document.documentElement.offsetWidth;[].forEach.call(document.querySelectorAll('*'), function (el) {if (el.offsetWidth > docWidth) console.log(el)})})();

Here is the full script, beautified for readability:

var docWidth =  document.documentElement.offsetWidth;
[].forEach.call(document.querySelectorAll("*"), function(el) {
  if (el.offsetWidth > docWidth) {
    console.log(el);
  }
});

If you analyze the code carefully, you’ll see that the first instruction gets the current document’s offset width. It then loops over every element in the document and, whenever it finds one wider than that offset width, prints the offending element to the console.
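Here’s a small variation of the same approach (my own sketch, not part of the original one-liner) that also outlines the offenders in red, so you can spot them on the page itself instead of digging through the console output:

var docWidth = document.documentElement.offsetWidth;
[].forEach.call(document.querySelectorAll("*"), function (el) {
  if (el.offsetWidth > docWidth) {
    // Mark the overflowing element visually and still log it.
    el.style.outline = "2px solid red";
    console.log(el);
  }
});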

That’s all guys!

Let me know in the comments if you have questions or queries; you can also DM me directly.

Also follow me for similar articles, tips, and tricks ❤.

Make Subfolder A Git Submodule

Ever been in a situation where a sub-folder of your Git repository needs to branch out as a new repository? In this article I tried a new way, using a Python module, to simplify the process. These steps are also recommended by the Git core team as the way to move a sub-folder to a new, clean repository (link).


We’re just clones, sir. We’re meant to be expendable.

— Sinker.

Come on let’s jump in! 🚀

Prerequisites

First of all, you must have Git1 on your machine. Second, you must have an existing test Git repository and Python 32 installed.

If you don’t have Git yet, you can install it from its official sources; it’s available on all platforms, even Android. Or, if you have Visual Studio3 installed, just locate the bundled Git on your drive. Python can also be installed using the Visual Studio installer.

So where do we start?

This will be our initial test repository structure:

+-+ root
  |
  +-+ test-repository
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory

The first step is to duplicate the test repository, either by copying it with the cp command or by creating a cloned copy using git clone, as shown after the diagram below.

+-+ root
  |
  +-+ test-repository
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory
  +-+ test-repository-copy
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory
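
For example, either of these commands, run from the root folder, produces that layout:

cp -r test-repository test-repository-copy

git clone test-repository test-repository-copy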

Then install the Python module named git-filter-repo using the pip utility.

pip3 install git-filter-repo

git-filter-repo simplifies the process of filtering files, directories and history. As stated on its GitHub page, the tool falls in the same category as git filter-branch. You can check its GitHub repository page for the pros and cons against similar tools.

The next thing we do is go into the cloned test repository and filter out the directory we want to separate into a new repository (in our case, desired-directory).

cd test-repository-copy
git filter-repo --path desired-directory --subdirectory-filter desired-directory

This will rewrite the clone’s history and delete existing content that does not match the subdirectory filter. The new structure of the directory will look like this:

+-+ root
  |
  +-+ test-repository
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory
  +-+ desired-directory
  | |
  | +-+ contents

The desired-directory will now become its own repository, retaining the history of the files inside it.

After moving the sub-folder to its own repository, we go back to our original test repository and delete the filtered directory.

cd test-repository
git rm -rf desired-directory

Still on the test-repository, create a new git submodule and link the filtered directory repository.

git submodule add ../desired-directory desired-directory
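
Finally, commit the new .gitmodules entry and the submodule link (the commit message here is just an example):

git commit -m "Move desired-directory into its own submodule"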

That’s all the steps needed; check that everything is working. See the quick review below for a summarized setup.

Quick Review

Here are the simplified steps based on the above:

  1. Make a copy or clone the current project where the sub-folder is located.
  2. Install git-filter-repo using the command pip3 install git-filter-repo.
  3. On the cloned project folder, filter it based on the directory you want to turn into a new repository, with the command git filter-repo --path <new-path> --subdirectory-filter <filtered-directory>.
  4. Go to your current project folder and delete the sub-folder using the command git rm -rf <filtered-directory>.
  5. On the current project create the sub-module using git submodule add <new-repo-url> <filtered-directory>.
  6. Check if everything is okay.

That’s all guys! Always make a backup of your repository before proceeding. Then again, it’s under the Git version control system, so you can go back and re-fix things if something is wrong.

Conclusion

There are many answers on the internet regarding this matter, but most don’t explain what will occur when you run the commands. I’ve personally tried this one, as I used to run a mono-repository setup, but it became so large that it was hard to maintain, especially when testing and checking the history.

Let me know in the comments if you have questions or queries; you can also DM me directly.

Follow me for similar articles, tips, and tricks ❤.


  1. Git (/ɡɪt/) is a distributed version-control system for tracking changes in any set of files, originally designed for coordinating work among programmers cooperating on source code during software development. ↩︎
  2. Python is an interpreted, high-level and general-purpose programming language. Python’s design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. ↩︎
  3. Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs, as well as websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms such as Windows API, Windows Forms, Windows Presentation Foundation, Windows Store and Microsoft Silverlight. It can produce both native code and managed code. ↩︎

Simple Way To Determine Pluralization Of Package Name In Java And Rust

Ever been in a situation where you’re having a hard time naming a package in your code? For me it’s always a struggle, probably because I have partial OCD/OCPD (Obsessive Compulsive Personality Disorder)1.

Anyway, I found this answer on StackExchange a while back (hopefully it’s still there) and I’ve been using it ever since when naming my namespaces and packages.


It’s double the giggles and double the grins, and double the trouble if you’re blessed with twins.

— Anonymous.

Just in case it gets deleted, here is the full answer:


Use the plural for packages with homogeneous contents and the singular for packages with heterogeneous contents.

A class is similar to a database relation. A database relation should be named in the singular as its records are considered to be instances of the relation. The function of a relation is to compose a complex record from simple data.

A package, on the other hand, is not a data abstraction. It assists with organization of code and resolution of naming conflicts. If a package is named in the singular, it doesn’t mean that each member of the package is an instance of the package; it contains related but heterogeneous concepts. If it is named in the plural (as they often are), I would expect that the package contains homogeneous concepts.

For example, a type should be named TaskCollection instead of TasksCollection, as it is a collection containing instances of a Task. A package named com.myproject.task does not mean that each contained class is an instance of a task. There might be a TaskHandler, a TaskFactory, etc. A package named com.myproject.tasks, however, would contain different types that are all tasks: TakeOutGarbageTask, DoTheDishesTask, etc.


Here is the original answer link from Software Engineering StackExchange:
Should package names be singular or plural?

Check the original answer on Stack Exchange if there is an update.


  1. Obsessive–compulsive personality disorder (OCPD) is a cluster C personality disorder marked by an excessive need for orderliness, neatness, and perfectionism. Symptoms are usually present by the time a person reaches adulthood, and are visible in a variety of situations. ↩︎

Powershell Symbolic Links in Windows 10

Recently, I’ve been using the PowerShell1 prompt more than the old command prompt2. Both command consoles can still be run on Windows 10, but these days I prefer PowerShell, as you can use it to create more complex shell scripts on Windows and access some C# modules.


A chain is only as strong as its weakest link.

— Anonymous.

In my recent post about moving ProgramData to another drive, I used the mklink utility to create junction directories to and from the new location. So here are the equivalent commands:

Command Prompt Syntax    | Powershell Equivalent Syntax
mklink Link Target       | New-Item -ItemType SymbolicLink -Name Link -Target Target
mklink /D Link Target    | New-Item -ItemType SymbolicLink -Name Link -Target Target
mklink /H Link Target    | New-Item -ItemType HardLink -Name Link -Target Target
mklink /J Link Target    | New-Item -ItemType Junction -Name Link -Target Target
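
For example, to create a directory junction from C:\Data to D:\Data (hypothetical paths, adjust them to your setup), you would run:

New-Item -ItemType Junction -Path "C:\Data" -Target "D:\Data"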

The New-Item command is also analogous to the Unix touch command tool.
Check the definition of those commands first before running them on your system.
That’s all guys!

Leave a comment if you have questions and queries. Also you can DM me on twitter 😉.

💻


  1. PowerShell is a task automation and configuration management framework from Microsoft, consisting of a command-line shell and the associated scripting language. Initially a Windows component only, known as Windows PowerShell, it was made open-source and cross-platform on 18 August 2016 with the introduction of PowerShell Core. ↩︎
  2. The name refers to its executable filename. It is also commonly referred to as cmd or the Command Prompt, referring to the default window title on Windows. The implementations differ on the various systems but the behavior and basic set of commands is generally consistent. cmd.exe is the counterpart of COMMAND.COM in DOS and Windows 9x systems, and analogous to the Unix shells used on Unix-like systems. ↩︎

Identity Server 4 On Kubernetes Nginx Ingress

Have you ever tried deploying Identity Server 41 on a k8s (Kubernetes2) setup with Nginx3 ingress?

If you have, I’m sure you’ve encountered some problems, as the stock Nginx ingress is not properly configured for an ASP.NET project and is not well optimized for Identity Server 4.


The first step towards getting somewhere is to decide you’re not going to stay where you are.

— J.P. Morgan.

Come on, join me as we dive into the configurations!

Prerequisites

First of all, you must have Kubernetes on your machine. Second, you must have an existing test-bed project for Identity Server 4.

If you don’t have Kubernetes, perhaps you could try installing MicroK8s. MicroK8s works on Windows and macOS.

So where do we start?

First, we modify the ingress ConfigMap configuration, and add the following lines:

proxy-buffer-size: "128k"  
proxy-buffers: "4 256k"  
proxy-busy-buffers-size: "256k"  
client-header-buffer-size: "64k"  
http2-max-field-size: "16k"  
http2-max-header-size: "128k"  
large-client-header-buffers: "8 64k"

These specific modifications allow Identity Server 4 to send and receive large header data, which it needs to store and sort out JWT (JSON Web Token) identifiers. You can check this sample setup in my test ingress ConfigMap YAML:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
  selfLink: /api/v1/namespaces/default/configmaps/nginx-ingress-nginx-ingress
  uid: 9fe8c06b-4f7c-4032-a938-505c308ed332
  resourceVersion: '10291469'
  creationTimestamp: '2020-09-18T12:46:50Z'
  labels:
    app.kubernetes.io/instance: nginx-ingress
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress-nginx-ingress
    helm.sh/chart: nginx-ingress-0.6.1
  annotations:
    meta.helm.sh/release-name: nginx-ingress
    meta.helm.sh/release-namespace: default
data:
  client-header-buffer-size: 64k
  http2-max-field-size: 16k
  http2-max-header-size: 128k
  keepalive-timeout: '65'
  large-client-header-buffers: 8 64k
  proxy-buffer-size: 128k
  proxy-buffers: 4 256k
  proxy-busy-buffers-size: 256k
  proxy-http-version: '1.1'
  proxy-read-timeout: '150'
  sendfile: 'on'
  use-http2: 'false'

The next thing we do is adjust our code to forward headers from and to the ingress and the app. These method calls are also recommended by the Microsoft docs; you can check the setup here.

public void ConfigureServices(IServiceCollection services)
{
    // ... code omitted ...
    // Needed for the load balancer to forward headers
    services.Configure<ForwardedHeadersOptions>(options =>
    {
        options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
        options.RequireHeaderSymmetry = false;
        options.KnownNetworks.Clear();
        options.KnownProxies.Clear();
    });
}

The docs specify that clearing the known networks and proxies is needed when you are hosting ASP.NET apps in a non-Windows hosting environment.

After adding the forwarded headers configuration to our ConfigureServices method, we also need to add the forwarded headers middleware in the Configure method, which can also be found in the Startup.cs file.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // ... code omitted ...
    app.UseForwardedHeaders();
    // ... code omitted ...
}

After that, restart the Nginx ingress and also your app to test whether everything is working fine. The next change is only needed if you are using TLS.

If your ingress setup is TLS4-terminated, you also need to add this to your Configure method.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // ... code omitted ...
    app.Use(async (ctx, next) =>
    {
       ctx.Request.Scheme = "https";
       await next();
    });
    // ... code omitted ...
}

This custom middleware converts all incoming calls to the secure HTTPS scheme. What a TLS-terminated ingress does is proxy the calls from your RS (Resource Server) to your AS (Authorization Server), which is Identity Server 4, over plain HTTP, while TLS needs a consistent secure HTTP scheme. If you look at your openid-configuration, it returns http:// endpoints only; that is the problem, and that’s why we rewrite the scheme internally using a custom middleware.

After all is done, restart the service and test every nook and cranny.
That’s all guys!

Conclusion

It’s not just a simple clone-image-and-deploy setup in k8s, especially if you’re trying to deploy a C# app; sometimes you need to optimize some config for it to run smoothly and work well. Check the recommended deployment guide in the Microsoft docs.

Let me know in the comments if you have questions or queries; you can also DM me directly.

Follow me for similar articles, tips, and tricks ❤.


  1. IdentityServer is an OpenID Connect provider – it implements the OpenID Connect and OAuth 2.0 protocols. ↩︎
  2. Kubernetes is an open-source containerorchestration system for automating computer application deployment, scaling, and management. ↩︎
  3. Nginx (pronounced “engine X”, /ˌɛndʒɪnˈɛks/ EN-jin-EKS), stylized as NGINX, nginx or NginX, is a web server that can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache. ↩︎
  4. Transport Layer Security (TLS), and its now-deprecated predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communications security over a computer network. Several versions of the protocols are widely used in applications such as web browsing, email, instant messaging, and voice over IP (VoIP). Websites can use TLS to secure all communications between their servers and web browsers. ↩︎

Moving ProgramData Folders To Other Drive Using Windows 10

My C: drive became full, and it occurred to me that it would be hard to move my files to a new SSD1 if I bought one.

So it got me into thinking what are the things I can do to remove and free up space in my C: drive?


I don’t know where I’m going from here, but I promise it won’t be boring.

— David Bowie.

The first thing that came up was the Disk Cleanup tool bundled with Windows 10. It only freed up 10 GB of data, so I then checked which folders contained the largest amounts of data.

The result was my user account folder and the ProgramData folder.
Here are the things I did to move the ProgramData contents to my spare drive.

DISCLAIMER: Before doing this on your machine please test and research first each command before executing on your machine / production environment.

First, I copied and mirrored the ProgramData folder structure using the robocopy command. The /MIR flag tells robocopy to mirror the whole directory tree, and /XJ excludes junction points so the copy doesn’t recurse into them. (To also retain the ACLs2 and other security settings, use the /COPYALL variant further below.)

robocopy /XJ /MIR "C:\ProgramData" "D:\ProgramData"

You could also use these other command flags; this command is non-destructive, unlike the mirror flag. The mirror flag deletes files at the destination that are missing from the source, while this one just overwrites and retains them, and /COPYALL copies all file info, including the security settings.

robocopy /xj /s /copyall C:\ProgramData D:\ProgramData

After everything’s done copying, you start creating junction links and symlinks3 from your spare drive (for me it’s the D: drive). %~NA tells the batch command to take only the base folder name, and %~A expands to the whole absolute path. The command below will only create directory junctions to begin with:

FOR /D %A IN ("D:\ProgramData\*") DO (MKLINK /J "C:\ProgramData\%~NA" "%~A")

This next command creates symbolic links for the files from source to destination; %~NXA expands to the file name including its extension.

FOR %A IN ("D:\ProgramData\*") DO (MKLINK "C:\ProgramData\%~NXA" "%~A")

Then restart your machine and ensure everything’s working fine. I think some folders, like Microsoft and Packages, should be excluded from the copying and junction-making.

That’s all guys. If you have any questions, DM me or comment on this post.


  1. A solid-state drive (SSD) is a solid-state storage device that uses integrated circuit assemblies to store data persistently, typically using flash memory, and functioning as secondary storage in the hierarchy of computer storage. It is also sometimes called a solid-state device or a solid-state disk, even though SSDs lack the physical spinning disks and movable read–write heads used in hard disk drives (HDDs) and floppy disks. ↩︎
  2. An access-control list (ACL) is a list of permissions associated with a system resource (object). An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. ↩︎
  3. A symbolic link (also symlink or soft link) is a term for any file that contains a reference to another file or directory in the form of an absolute or relative path and that affects pathname resolution. ↩︎

Creating A Cloudflare Worker Using Rust For Fetching Resources

To learn something new, you need to try new things and not be afraid to be wrong.

— Roy T. Bennett.

Have you ever worked with WebAssembly? Now is a good time: it’s already supported by your browser, from Chromium-based ones (e.g. Edge, Brave, Google Chrome) to Firefox, and it’s been enabled by default since 2019. Plus, you can use your favorite language (well, technically not all of them) to develop WebAssembly.

In this quick tutorial we will be using Rust, not the game though.

Ferris, the Rust language mascot, saying hello 👋.
Come on, let’s jump in! 💪

Prerequisites

First of all, you must have a Cloudflare account. Second, you need the Rust toolchain installed on your computer, and I also assume you are currently running Windows 10 or a Linux distro with a proper environment set up.

If you don’t have Rust, go to this link to install it. As for the Cloudflare account, just use the free tier, which gives you 100,000 worker calls per day and free Key-Value (KV) storage.

So where do we start?

The first thing we need to do is install wrangler, a command-line tool developed by Cloudflare to complement the development and deployment of Workers. Install the wrangler tool using the cargo command utility.

cargo install wrangler

The command above will first fetch the source from crates.io and compile it as a binary. It will then automatically install it into your ~/.cargo/bin directory.

💡: Cloudflare Workers is similar to AWS Lambda and Azure Functions. They all fall under the serverless computing category.

After the installation of wrangler, you need to authenticate using your Cloudflare account API key, which you can get from the user settings panel.

wrangler login

If all works well, the next thing we do is generate the cargo project using the wrangler command line. Execute the command below to generate a cargo project using the Rust WASM worker template:

wrangler generate worker_fetch_demo https://github.com/cloudflare/rustwasm-worker-template.git --type="rust"

After that, go inside the folder named worker_fetch_demo and edit the file Cargo.toml. Add the following crate dependencies.

cfg-if = "0.1.2"
wasm-bindgen = { version = "0.2", features = ["serde-serialize"] }
console_error_panic_hook = { version = "0.1.1", optional = true }
wee_alloc = { version = "0.4.2", optional = true }
futures = { version = "0.3", default-features = false }
js-sys = "0.3.45"
wasm-bindgen-futures = "0.4"
serde = { version = "1.0", features = ["derive"] }
serde_derive = "^1.0.59"
serde_json = "1.0"
log = "0.4"
console_log = { version = "0.2", optional = true }

The wasm-bindgen package is the most important, as it is what links the package to JavaScript scopes and other web- and WebAssembly-related functionality. You also need to add the web-sys package, as it provides the basic mappings and calls to JavaScript functions.

You’ll get to know what the other packages are for if you’ve read the Rust Programming Language book.

[dependencies.web-sys]
version = "0.3.45"
features = [
  'Headers',
  'Request',
  'RequestInit',
  'Response',
  'ServiceWorkerGlobalScope',
]

After adding those crate dependencies, they will automatically be fetched on build or when you call cargo update.

The next thing we modify is the file worker/worker.js. This file serves as the main entry point of our program and calls our compiled wasm module. We need to make a minor modification to it: capturing the request and serving the wasm response as JSON.

async function handleRequest(request) {
  const { test } = wasm_bindgen;
  await wasm_bindgen(wasm);

  const data = await test();
  return new Response(JSON.stringify(data), {
    headers: {
      'Content-Type': 'application/json;charset=UTF-8',
    },
    status: 200,
  });
}

We now move on to the Rust files. 🦀

In the file src/lib.rs, add the following code. This particular block adds basic initialization for our console logging (similar to JavaScript’s console.log) if the console_log feature is present.

cfg_if! {
    if #[cfg(feature = "console_log")] {
        fn init_log() {
            console_log::init_with_level(Level::Trace).expect("error initializing log");
        }
    } else {
        fn init_log() {}
    }
}

Next, we add a function that will hook to js_sys to return the ServiceWorkerGlobalScope.

On Cloudflare specifically, the normal browser fetch call won’t work, as the workers run on a headless V8 JavaScript engine. That’s why we need to hook into the service worker’s internal HTTP client.

pub fn worker_global_scope() -> Option<web_sys::ServiceWorkerGlobalScope> {
    js_sys::global().dyn_into::<web_sys::ServiceWorkerGlobalScope>().ok()
}

After adding our worker_global_scope, we proceed with editing the greet function. First, rename it to test, then add our first instruction, which hooks Rust panics into console_error. Then call init_log to initialize the basic logging functionality.

std::panic::set_hook(Box::new(console_error_panic_hook::hook));
init_log();

Then we initialize our request with the GET method; you could also use other HTTP methods (e.g. POST, PUT, DELETE, …) depending on your application’s needs and the endpoints you want to call.

let mut opts = RequestInit::new();
opts.method("GET");

Next, we create the request payload that we will submit to our custom fetch. The instruction contains the endpoint and the request options.

let request = Request::new_with_str_and_init(
    "https://httpbin.org/get",
    &opts
)?;

After finishing that, we scope and call the function we created earlier, then wrap the fetch in a future (an asynchronous call, similar to a JavaScript promise if you’re more familiar with that term).

let global = worker_global_scope().unwrap();
let resp_value = JsFuture::from(global.fetch_with_request(&request)).await?;

assert!(resp_value.is_instance_of::<Response>());
let resp: Response = resp_value.dyn_into().unwrap();
let json = JsFuture::from(resp.json()?).await?;

On the returned response, unwrap it and return its JSON value.

Here is the full wasm function that will be called by the worker.js we defined earlier:

#[wasm_bindgen]
pub async fn test() -> Result<JsValue, JsValue> {
    std::panic::set_hook(Box::new(console_error_panic_hook::hook));
    init_log();

    let mut opts = RequestInit::new();
    opts.method("GET");

    let request = Request::new_with_str_and_init(
        "https://httpbin.org/get",
        &opts
    )?;

    let global = worker_global_scope().unwrap();
    let resp_value = JsFuture::from(global.fetch_with_request(&request)).await?;

    assert!(resp_value.is_instance_of::<Response>());
    let resp: Response = resp_value.dyn_into().unwrap();
    let json = JsFuture::from(resp.json()?).await?;

    Ok(json)
}

Now we need to test it to see if everything’s okay. Spin up a local server using the command below.

wrangler dev
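
Calling the local URL returned by wrangler dev should relay httpbin’s JSON payload, roughly like this (a sketch; the actual headers and origin will differ):

{
  "args": {},
  "headers": {
    "Host": "httpbin.org"
  },
  "origin": "203.0.113.10",
  "url": "https://httpbin.org/get"
}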

Test everything by calling that URL with Postman or the Insomnia HTTP client. If everything is working fine, it’s now time to deploy the worker to a live server.

wrangler publish

After running the command above, it will return a live worker URL which you can now access from anywhere.

That’s all guys!

Conclusion

You can find the complete repository here.

This is not the only way to call fetch in a Rust Cloudflare Worker; the other method involves hooking directly into the JavaScript-exposed fetch (kindly look at the Cloudflare example files). If you have any questions, kindly leave a comment or DM me 😉.

Follow me for similar articles, tips, and tricks ❤.

Top 10 NMAP Flags That I Use Daily

It is not the monsters we should be afraid of; it is the people that don’t recognize the same monsters inside of themselves.

— Shannon L. Alder.

If you’re a network IT (Information Technology) engineer or a cybersecurity professional, you surely know about the tool nmap.

The tool nmap, which stands for network mapper1, is an open-source tool for network discovery and is mostly used for security auditing. I’ve been using this tool for many years, and these are my favorite command-line flags:

Skip reverse DNS call

This is a helpful flag, especially if you don’t want the additional milliseconds spent fetching records from a DNS server, or if you have a specific scenario that involves using only the internal cached hosts file.

nmap -n scanme.nmap.org

Stop ping checks

The -PN flag tells nmap to assume the host is online, skipping the check of whether it’s alive through ping2. This is particularly useful in situations where you know the target blocks all ICMP (Internet Control Message Protocol)3 traffic at the firewall.

nmap -PN scanme.nmap.org

Fingerprint scan

The -sV flag is especially useful in network auditing and in determining which ports are available. The command will probe the target machine’s ports and guess the service (including the service version) running on each.

nmap -sV scanme.nmap.org

Finding live host

This command is especially useful for network engineers who want to know whether there are any live hosts on the network. The notation below says to scan the specific subnet4 using the ICMP protocol and return the list of hosts that responded.

nmap -sP 192.168.1.1/24

Scan using specified network interface

If you have multiple NICs (Network Interface Controllers)5 and want to route the scan through a specific NIC, this is the solution. Normally nmap, like any other tool that uses the computer’s network, follows the OS-designated network route (determined by the routing table and preferred gateway). The -e flag tells nmap to use that specific network controller to perform the scan.

nmap -e eth0 scanme.nmap.org

SYN ping scans

The SYN ping sends SYN request packets to the target machine and checks whether it accepts them. This is one of the default alternative ways of checking whether a host is alive.

nmap -sP -PS scanme.nmap.org

ACK ping scans

The ACK scan is the opposite of SYN: this particular scan sends an ACK (acknowledge) packet to the target machine to see whether it responds. Most modern firewalls block this packet if it’s not associated with a three-way handshake.

nmap -sP -PA scanme.nmap.org

UDP port scans

This UDP6 port/ping scan is helpful when you know the target machine only blocks TCP packets. This flag sends UDP packets to ports available on the machine and checks whether the target machine responds.

nmap -sP -PU scanme.nmap.org

IP (Internet Protocol) ping scans

Actually, this particular scan is special, as it sends IP packets with the specified IP protocol number set in their IP header. It’s special in the sense that, if you don’t supply a protocol type, it sends multiple packets: ICMP, IGMP, and IP-in-IP.

nmap -sP -PO scanme.nmap.org

ARP ping scans

This particular scan is mostly useful in a LAN scenario. When you send an ARP packet, it returns the specific address or addresses that answered the broadcast request.

nmap -sP -PR scanme.nmap.org

That’s mostly all. I’ve used other flags, but these are my most-used command flags for nmap.
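
These flags compose well, by the way; a combined audit scan I often reach for looks like this (the target host and interface are just examples):

nmap -n -PN -sV -e eth0 scanme.nmap.org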


  1. Nmap (Network Mapper) is a free and open-source network scanner created by Gordon Lyon (also known by his pseudonym Fyodor Vaskovich). ↩︎
  2. Ping measures the round-trip time for messages sent from the originating host to a destination computer that are echoed back to the source. The name comes from active sonar terminology that sends a pulse of sound and listens for the echo to detect objects under water. ↩︎
  3. The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address, for example, an error is indicated when a requested service is not available or that a host or router could not be reached. ↩︎
  4. A subnetwork or subnet is a logical subdivision of an IP network. ↩︎
  5. A network interface controller (NIC, also known as a network interface card, network adapter, LAN adapter or physical network interface, and by similar terms) is a computer hardware component that connects a computer to a computer network. ↩︎
  6. The User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768. With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network. Prior communications are not required in order to set up communication channels or data paths. ↩︎

Change Default Template In Inkscape For Windows 10

Recently, I’ve been doing some vectors and animations for my new Android app for the stock market. Every time I open Inkscape1, it greets me with the default template, and the next thing I do is change the document properties to the way I want them; it became a chore.


Every child is an artist. The problem is how to remain an artist once we grow up.

— Pablo Picasso.

So I searched the internet for how to set the default template to my liking.

Steps to change the default template

  1. Create a new document and set its document properties, which can be found under File > Document Properties.
  2. The Document Properties dialog will show up.
  3. In the dialog, modify everything to your liking, based on what you want to see every time you open Inkscape.
  4. After that, do a File > Save As... and save it to your local user Inkscape directory, which will be C:\Users\<your-user>\AppData\Roaming\Inkscape\template
  5. Save it as default.svg, then restart Inkscape for the changes to take effect.

Enjoy, that’s all. 🍂


  1. Inkscape is a free and open-source vector graphics editor used to create vector images, primarily in Scalable Vector Graphics (SVG) format. Other formats can be imported and exported. ↩︎