Where It All Started.

Life, Stock Trading, Investments, Business and Startups. Mostly programming stuff, though.

Category: Software Development

Find Unintended CSS Overflow

Ever coded a static HTML site? I’m sure you’ve hit this problem: you install some CSS framework, write a few additional CSS files, finish the code, run it in a browser, and then TADAHHH!


You can only find truth with logic if you have already found truth without it.

— Gilbert Keith Chesterton.

There they are: unintended horizontal and vertical scrollbars. 😁

It’s a bit cumbersome to track down where those scrollbars or overflows originate, and that’s what this article is for. To find the offending elements, here’s a one-liner you can paste directly into your web developer console (press F12, then open the Console tab). The Console tab is where JavaScript logs (console.log) show up. Copy the snippet below and read through it first before pasting it into your console.

javascript:void(function () {var docWidth = document.documentElement.offsetWidth;[].forEach.call(document.querySelectorAll('*'), function (el) {if (el.offsetWidth > docWidth) console.log(el)})})();

Here is the same script, beautified for readability:

var docWidth =  document.documentElement.offsetWidth;
[].forEach.call(document.querySelectorAll("*"), function(el) {
  if (el.offsetWidth > docWidth) {
    console.log(el);
  }
});

If you look at the code carefully, you’ll see that the first instruction gets the document’s offsetWidth. It then loops over every element in the page (that’s what the '*' selector matches) and, for any element wider than the document, logs that element to the console so you can inspect its attributes and location. Note that the check compares widths only, so it is aimed at horizontal overflow.

That’s all guys!

Let me know in the comments if you have any questions; you can also DM me directly.

Also follow me for similar articles, tips, and tricks ❤.

Make Subfolder A Git Submodule

Ever been in a situation where a sub-folder of your Git repository needs to branch out as its own repository? In this article I try a newer way, using a Python module to simplify the process. These steps are also the ones recommended by the Git core team for moving a sub-folder to a new, clean repository (link).


We’re just clones, sir. We’re meant to be expendable.

— Sinker.

Come on let’s jump in! 🚀

Prerequisites

First of all, you must have Git[1] on your machine. Second, you need an existing test Git repository and Python 3[2] installed.

If you don’t have Git yet, you can install it from the official sources; it’s available on all platforms, even on Android. Or, if you have Visual Studio[3] installed, just locate the bundled Git on your drive. Python can also be installed using the Visual Studio installer.

So where do we start?

This will be our initial test repository structure:

+-+ root
  |
  +-+ test-repository
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory

The first step is to duplicate the test repository, either by copying it with the cp command or by creating a clone with git clone.

+-+ root
  |
  +-+ test-repository
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory
  +-+ test-repository-copy
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory

Then install the Python module named git-filter-repo using the pip utility.

pip3 install git-filter-repo

git-filter-repo simplifies the process of filtering files, directories, and history. As its GitHub page says, it falls into the same category as git filter-branch. You can check its GitHub repository for the pros and cons compared with similar tools.

Next, go into the cloned test repository and filter for the directory you want to separate into a new repository (in our case, desired-directory).

cd test-repository-copy
git filter-repo --path desired-directory --subdirectory-filter desired-directory

This rewrites the clone’s history and deletes any content that does not match the subdirectory filter. The new directory structure will look like this:

+-+ root
  |
  +-+ test-repository
  | |
  | +-+ desired-directory
  |   |
  |   +-+ contents
  | +-+ other-directory
  +-+ desired-directory
  | |
  | +-+ contents

The clone (renamed here to desired-directory) now contains only the desired directory as its own repository, retaining the history of the files inside it.

After moving the sub-folder to its own repository, we go back to our original test repository and delete the filtered directory.

cd test-repository
git rm -rf desired-directory

Still inside test-repository, commit the removal, then add a new Git submodule that links to the filtered repository.

git submodule add ../desired-directory desired-directory

Those are all the steps needed; check that everything works. See the quick review below for a summary of the setup.

Quick Review

Here are the simplified steps based on the above (a small Python sketch that automates them follows this list):

  1. Make a copy or clone of the current project where the sub-folder is located.
  2. Install git-filter-repo using the command pip3 install git-filter-repo.
  3. In the cloned project folder, filter on the directory you want to turn into a new repository with the command git filter-repo --path <new-path> --subdirectory-filter <filtered-directory>.
  4. Go to your current project folder and delete the sub-folder using the command git rm -rf <filtered-directory>.
  5. In the current project, create the submodule using git submodule add <new-repo-url> <filtered-directory>.
  6. Check if everything is okay.
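
If you have several sub-folders to split out, the same flow is easy to script. Here is a minimal Python sketch of the steps above using subprocess; it assumes Git and git-filter-repo are already installed, and the paths, repository names, and commit messages are placeholders for illustration, so adapt them to your own setup.

import os
import subprocess

def run(args, cwd=None):
    # Run a command and stop immediately if it fails.
    subprocess.run(args, cwd=cwd, check=True)

def split_subfolder(original_repo, new_repo, subdir):
    original_repo = os.path.abspath(original_repo)
    new_repo = os.path.abspath(new_repo)
    # 1. Work on a separate clone so the original history stays untouched.
    run(["git", "clone", original_repo, new_repo])
    # 2. Rewrite the clone so only the sub-folder (and its history) remains.
    run(["git", "filter-repo", "--path", subdir, "--subdirectory-filter", subdir], cwd=new_repo)
    # 3. Remove the sub-folder from the original repository and commit.
    run(["git", "rm", "-r", subdir], cwd=original_repo)
    run(["git", "commit", "-m", "Move " + subdir + " to its own repository"], cwd=original_repo)
    # 4. Link the new repository back as a submodule and commit.
    run(["git", "submodule", "add", new_repo, subdir], cwd=original_repo)
    run(["git", "commit", "-m", "Add " + subdir + " as a submodule"], cwd=original_repo)

split_subfolder("test-repository", "desired-directory", "desired-directory")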

That’s all! Always make a backup of your repository before proceeding. Then again, it’s all under Git version control, so you can go back and fix things if something goes wrong.

Conclusion

There are many answers on the internet regarding this topic, but most of them don’t explain what actually happens when you run these commands. I’ve tried this one personally: I used to have a mono-repository setup, but it became so large that it was hard to maintain, especially when testing and digging through the history.

Let me know in the comments if you have any questions; you can also DM me directly.

Follow me for similar articles, tips, and tricks ❤.


  1. Git (/ɡɪt/) is a distributed version-control system for tracking changes in any set of files, originally designed for coordinating work among programmers cooperating on source code during software development. ↩︎
  2. Python is an interpreted, high-level and general-purpose programming language. Python’s design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. ↩︎
  3. Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs, as well as websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms such as Windows API, Windows Forms, Windows Presentation Foundation, Windows Store and Microsoft Silverlight. It can produce both native code and managed code. ↩︎

Simple Way To Determine Pluralization Of Package Name In Java And Rust

Ever been in a situation where you have a hard time naming a package in your code? For me it’s every time, probably because I have a bit of OCD/OCPD (obsessive-compulsive personality disorder)[1].

Anyway, I found this answer on Stack Exchange a while back (hopefully it’s still there), and I’ve been using it ever since when naming my namespaces and packages.


It’s double the giggles and double the grins, and double the trouble if you’re blessed with twins.

— Anonymous.

Just in case it gets deleted, here is the full answer:


Use the plural for packages with homogeneous contents and the singular for packages with heterogeneous contents.

A class is similar to a database relation. A database relation should be named in the singular as its records are considered to be instances of the relation. The function of a relation is to compose a complex record from simple data.

A package, on the other hand, is not a data abstraction. It assists with organization of code and resolution of naming conflicts. If a package is named in the singular, it doesn’t mean that each member of the package is an instance of the package; it contains related but heterogeneous concepts. If it is named in the plural (as they often are), I would expect that the package contains homogeneous concepts.

For example, a type should be named TaskCollection instead of TasksCollection, as it is a collection containing instances of a Task. A package named com.myproject.task does not mean that each contained class is an instance of a task. There might be a TaskHandler, a TaskFactory, etc. A package named com.myproject.tasks, however, would contain different types that are all tasks: TakeOutGarbageTask, DoTheDishesTask, etc.


Here is the original answer link from Software Engineering StackExchange.
Should package names be singular or plural?

Check the original answer on Stack Exchange if there is an update.


  1. Obsessive–compulsive personality disorder (OCPD) is a cluster C personality disorder marked by an excessive need for orderliness, neatness, and perfectionism. Symptoms are usually present by the time a person reaches adulthood, and are visible in a variety of situations. ↩︎

Creating A Cloudflare Worker Using Rust For Fetching Resources

To learn something new, you need to try new things and not be afraid to be wrong.

— Roy T. Bennett.

Have you ever worked with WebAssembly? Now is a good time to start. First, it’s already supported by your browser, from Chromium-based ones (e.g. Edge, Brave, Google Chrome) to Firefox, and it has been enabled by default since 2019. Plus, you can use your favorite language (well, technically not every language) to develop WebAssembly.

In this quick tutorial we will be using Rust, not the game though.

Ferris, the Rust language mascot, saying hello 👋.
Come on, let’s jump in! 💪

Prerequisites

First of all, you must have a Cloudflare account. Second, you need the Rust toolchain installed on your computer; I also assume you are running Windows 10 or a Linux distro with a properly set up environment.

If you don’t have Rust, go to this link to install it. As for the Cloudflare account, the free tier is enough; it gives you 100,000 Worker requests per day and free Key-Value (KV) storage.

So where do we start?

The first thing we need to do is install wrangler, a command-line tool developed by Cloudflare specifically for developing and deploying Workers. Install the wrangler tool using the cargo command utility.

cargo install wrangler

The command above fetches the source from crates.io and compiles it into a binary, which is automatically installed in your ~/.cargo/bin directory.

💡: Cloudflare Workers is similar to AWS Lambda and Azure Functions; they all fall under the serverless computing category.

After installing wrangler, you need to authenticate with your Cloudflare account API key, which you can get from the user settings panel.

wrangler login

If all works well, the next thing we need to do is generate the Cargo project using the wrangler command line. Execute the command below to generate a project from the Rust WASM worker template:

wrangler generate worker_fetch_demo https://github.com/cloudflare/rustwasm-worker-template.git --type="rust"

After that, go into the folder named worker_fetch_demo and edit the Cargo.toml file. Add the following crates to the [dependencies] section.

cfg-if = "0.1.2"
wasm-bindgen = { version = "0.2", features = ["serde-serialize"] }
console_error_panic_hook = { version = "0.1.1", optional = true }
wee_alloc = { version = "0.4.2", optional = true }
futures = { version = "0.3", default-features = false }
js-sys = "0.3.45"
wasm-bindgen-futures = "0.4"
serde = { version = "1.0", features = ["derive"] }
serde_derive = "^1.0.59"
serde_json = "1.0"
log = "0.4"
console_log = { version = "0.2", optional = true }

The wasm-bindgen crate is the most important one, as it is what lets the package call into JavaScript scopes and other web and WebAssembly related functionality. You also need to add the web-sys crate, which provides the basic mappings and calls to JavaScript functions.

You’ll get to know what the other crates are for if you’ve already read The Rust Programming Language book.

[dependencies.web-sys]
version = "0.3.45"
features = [
  'Headers',
  'Request',
  'RequestInit',
  'Response',
  'ServiceWorkerGlobalScope',
]

After you add those crate dependencies, they will be fetched automatically on build or when you run cargo update.

The next thing to modify is the file worker > worker.js. This file serves as the main entry point of our program and calls our compiled wasm module. We need to make a minor modification to it, specifically capturing the request and serving the wasm response as JSON.

async function handleRequest(request) {
  const { test } = wasm_bindgen;
  await wasm_bindgen(wasm);

  const data = await test();
  return new Response(JSON.stringify(data), {
    headers: {
      'Content-Type': 'application/json;charset=UTF-8',
    },
    status: 200,
  });
}

Now we move on to the Rust files. 🦀

In the file src > lib.rs, add the following code. This block sets up basic initialization for our console logger (similar to JavaScript’s console.log) when the console_log feature is present.

cfg_if! {
    if #[cfg(feature = "console_log")] {
        fn init_log() {
            console_log::init_with_level(Level::Trace).expect("error initializing log");
        }
    } else {
        fn init_log() {}
    }
}

Next, we add a function that hooks into js_sys to return the ServiceWorkerGlobalScope.

On Cloudflare specifically, the normal browser fetch call won’t work, because Workers run on a headless V8 JavaScript engine. That’s why we need to hook into the service worker’s internal HTTP client.

pub fn worker_global_scope() -> Option<web_sys::ServiceWorkerGlobalScope> {
    js_sys::global().dyn_into::<web_sys::ServiceWorkerGlobalScope>().ok()
}

After adding worker_global_scope, we proceed with editing the template’s greet function. First, rename it to test, then add our first instruction to hook Rust panics into console_error. Then call init_log to initialize basic logging functionality.

std::panic::set_hook(Box::new(console_error_panic_hook::hook));
init_log();

Then we initialize our request with the GET method; you could also use other HTTP methods (e.g. POST, PUT, DELETE, …). The method depends on your application’s needs and the endpoints you want to call.

let mut opts = RequestInit::new();
opts.method("GET");

Next, we create the request that we will submit to our custom fetch. It contains the endpoint and the request options.

let request = Request::new_with_str_and_init(
    "https://httpbin.org/get",
    &opts
)?;

After that, we call the helper we created earlier to get the global scope and use its fetch, wrapping the result in a future (an asynchronous computation, similar to a JavaScript promise, if you’re more familiar with that term).

let global = worker_global_scope().unwrap();
let resp_value = JsFuture::from(global.fetch_with_request(&request)).await?;

assert!(resp_value.is_instance_of::<Response>());
let resp: Response = resp_value.dyn_into().unwrap();
let json = JsFuture::from(resp.json()?).await?;

Finally, we unwrap the returned response and return its JSON value.

Here is the full wasm function that will be called from the worker.js we defined earlier:

#[wasm_bindgen]
pub async fn test() -> Result<JsValue, JsValue> {
    std::panic::set_hook(Box::new(console_error_panic_hook::hook));
    init_log();

    let mut opts = RequestInit::new();
    opts.method("GET");

    let request = Request::new_with_str_and_init(
        "https://httpbin.org/get",
        &opts
    )?;

    let global = worker_global_scope().unwrap();
    let resp_value = JsFuture::from(global.fetch_with_request(&request)).await?;

    assert!(resp_value.is_instance_of::<Response>());
    let resp: Response = resp_value.dyn_into().unwrap();
    let json = JsFuture::from(resp.json()?).await?;

    Ok(json)
}

Now we need to test it to see if everything’s okay. Spin up a local server using the command below.

wrangler dev

Test everything by calling the URL returned by wrangler dev using Postman or the Insomnia HTTP client. If everything works fine, it’s now time to deploy the worker to the live servers.

wrangler publish

After running the command above, you get a live Worker URL which you can now access from anywhere.

That’s all guys!

Conclusion

You can find the complete repository here.

This is not the only way to call fetch in a Rust Cloudflare Worker; another method involves hooking directly into the JavaScript-exposed fetch (see the Cloudflare example files). If you have any questions, kindly leave a comment or DM me 😉.

Follow me for similar articles, tips, and tricks ❤.

Change Default Template In Inkscape For Windows 10

Recently, I’ve been doing some vector work and animations for my new Android stock market app. Every time I open Inkscape[1], it greets me with the default template, and the next thing I do is change the document properties to the way I want them. It became a chore.


Every child is an artist. The problem is how to remain an artist once we grow up.

— Pablo Picasso.

So I searched the internet for how to set the default template to my liking.

Steps to change the default template

  1. Create a new document and open its document properties via File > Document Properties.
  2. A simple dialog will show up; adjust it to your liking so it matches what you want to see every time you open Inkscape.
  3. After that, do a File > Save As... and save it to your local user Inkscape directory, which will be in C:\Users\<your-user>\AppData\Roaming\Inkscape\template
  4. Save it as default.svg, then restart Inkscape for the changes to take effect.

That’s all, enjoy. 🍂


  1. Inkscape is a free and open-source vector graphics editor used to create vector images, primarily in Scalable Vector Graphics (SVG) format. Other formats can be imported and exported. ↩︎

Sending Email Using MailKit in ASP.NET Core Web API

You do not need to know precisely what is happening, or exactly where it is all going. What you need is to recognize the possibilities and challenges offered by the present moment, and to embrace them with courage, faith and hope.

— Thomas Merton.

Hey guys, recently I’ve been working on an ASP.NET Core project that needed an email service to send reports. I was dumbfounded by some of the tutorials on how to implement the email functionality: some are overcomplicated while others are oversimplified.

So here I am creating yet another tutorial for sending email using MailKit and .NET Core.

Let’s jump in!

Prerequisites

First of all, you must have the .NET Core 3.1 SDK (Software Development Kit) installed on your computer; I also assume you are running Windows 10 or a Linux distro with a properly set up environment.

We will also use MailKit, a package that has become the de facto standard for sending email, as Microsoft recommends it in their tutorials over the standard System.Net.Mail.

So where do we start?

First we create our ASP.NET Web API project on the command-line. Execute the command below to create the project.

dotnet new webapi --name MyProject

The dotnet new command creates a project based on the given template, which in our case is webapi. The --name flag sets the project name, which is also used as the output folder.

After that, go into the project root folder and add a reference to the MailKit NuGet package. If the project already references MailKit, run dotnet restore to fetch and update the referenced assemblies locally.

dotnet add package MailKit

Then we add the SMTP (Simple Mail Transfer Protocol) settings to appsettings.json and appsettings.Development.json. The example below uses a Gmail setup; fill it in with your own account settings and be sure to use an app password in the password field.

If you have a custom SMTP server, just replace the server and port as well as any other needed fields.

"SmtpSettings": {
  "Server": "smtp.gmail.com",
  "Port": 587,
  "SenderName": "My Name",
  "SenderEmail": "<my-account-user>@gmail.com",
  "Username": "<my-account-user>@gmail.com",
  "Password": "<my-account-password>"
}

Create an entity class to store the SMTP settings. This class will receive the settings we configured above in appsettings.json.

namespace MyProject.Entities
{
    public class SmtpSettings
    {
        public string Server { get; set; }
        public int Port { get; set; }
        public string SenderName { get; set; }
        public string SenderEmail { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
    }
}

Next, we set up the mailer interface that will be provided to our controllers. This IMailer interface exposes a single method for sending email asynchronously. You can add more methods, but I feel one is enough.

public interface IMailer
{
    Task SendEmailAsync(string email, string subject, string body);
}

Implement the mailer class with a basic stub, then try to build it and check whether there are any errors. Also check whether linting reports any warnings or programming mistakes.

public class Mailer : IMailer
{
    public async Task SendEmailAsync(string email, string subject, string body)
    {
        await Task.CompletedTask;
    }
}

If everything builds properly, we implement the actual sending functionality. This method accepts the recipient email, the subject of the email, and the message body.

using MailKit.Net.Smtp;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Options;
using MimeKit;
using System;
using System.Threading.Tasks;
using MyProject.Entities;

namespace MyProject.Services
{
    public interface IMailer
    {
        Task SendEmailAsync(string email, string subject, string body);
    }

    public class Mailer : IMailer
    {
        private readonly SmtpSettings _smtpSettings;
        private readonly IWebHostEnvironment _env;

        public Mailer(IOptions<SmtpSettings> smtpSettings, IWebHostEnvironment env)
        {
            _smtpSettings = smtpSettings.Value;
            _env = env;
        }

        public async Task SendEmailAsync(string email, string subject, string body)
        {
            try
            {
                var message = new MimeMessage();
                message.From.Add(new MailboxAddress(_smtpSettings.SenderName, _smtpSettings.SenderEmail));
                message.To.Add(MailboxAddress.Parse(email));
                message.Subject = subject;
                message.Body = new TextPart("html")
                {
                    Text = body
                };

                using (var client = new SmtpClient())
                {
                    client.ServerCertificateValidationCallback = (s, c, h, e) => true;

                    if (_env.IsDevelopment())
                    {
                        await client.ConnectAsync(_smtpSettings.Server, _smtpSettings.Port, true);
                    }
                    else
                    {
                        await client.ConnectAsync(_smtpSettings.Server);
                    }

                    await client.AuthenticateAsync(_smtpSettings.Username, _smtpSettings.Password);
                    await client.SendAsync(message);
                    await client.DisconnectAsync(true);
                }
            }
            catch (Exception e)
            {
                throw new InvalidOperationException(e.Message);
            }
        }
    }
}

In the source above, we first create a MimeMessage, which contains all the data needed for the email headers and body; it corresponds to the SMTP MAIL FROM, RCPT TO, and DATA commands.

After that, we set up the SMTP client with the fields we configured in appsettings.json. The client.AuthenticateAsync call can be omitted if the SMTP server doesn’t have an authentication flow.

When everything is done in Mailer, we edit the Startup.cs file in the project root folder. In ConfigureServices, we bind the SMTP settings section and register a singleton object that will handle the mail service.

services.Configure<SmtpSettings>(Configuration.GetSection("SmtpSettings"));
services.AddSingleton<IMailer, Mailer>();

After setting up the services in Startup, we head over to WeatherForecastController.cs, which is included when we bootstrap the project. This file is part of the webapi template; you can use your own custom controller instead to call the IMailer interface.

private readonly IMailer _mailer;

public WeatherForecastController(ILogger<WeatherForecastController> logger, IMailer mailer)
{
    _logger = logger;
    _mailer = mailer;
}

Notice how the IMailer mailer parameter becomes available to us: it is injected because we registered the singleton in Startup. We then store it in a private field for later use.

We also create another action to handle a new route, /export, for sending a temporary weather report. Change it according to your own setup.

[HttpGet]
[Route("export")]
public async Task<IActionResult> ExportWeatherReport()
{
    await _mailer.SendEmailAsync("[email protected]", "Weather Report", "Detailed Weather Report");
    return NoContent();
}

In the code above we simply use the injected mailer and call its exposed SendEmailAsync method. Check the full source below for details on the namespaces that need to be imported in the module.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using MyProject.Services;

namespace MyProject.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        private static readonly string[] Summaries = new[]
        {
            "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
        };

        private readonly ILogger<WeatherForecastController> _logger;
        private readonly IMailer _mailer;

        public WeatherForecastController(ILogger<WeatherForecastController> logger, IMailer mailer)
        {
            _logger = logger;
            _mailer = mailer;
        }

        [HttpGet]
        public IEnumerable<WeatherForecast> Get()
        {
            var rng = new Random();
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateTime.Now.AddDays(index),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            })
            .ToArray();
        }

        [HttpGet]
        [Route("export")]
        public async Task<IActionResult> ExportWeatherReport()
        {
            await _mailer.SendEmailAsync("[email protected]", "Weather Report", "Detailed Weather Report");
            return NoContent();
        }
    }
}

When everything’s done, we build and test the Web API project. Execute the command below to check whether there are any errors.

dotnet build

Then deploy or publish it on IIS (Internet Information Services), or just run it standalone using dotnet run.

Conclusion

If you’re building an email service, always try to keep it as simple as possible to avoid unintended bugs. Sending email has never been easier, and with MailKit you don’t need complicated flows.

You can find the complete repository here.

Follow me for similar articles, tips, and tricks ❤.

Gimp Automating Image Processing with Python Fu

What a large volume of adventures may be grasped within the span of his little life by him who interests his heart in everything.

— Laurence Sterne.

Hi guys, one night I decided to create an online store and sell some drop-shipped products. I grabbed some pictures from the wholesale seller and planned to customize those images (just to add some store branding).

There were a hundred images I wanted to customize; doing it by hand would take ages, so I decided to create a batch script. My first thought was to use GIMP (an open-source image manipulator) and Script-Fu. After trying out the result in the Python-Fu console, I settled on the script below. It’s a simple design, but I was satisfied.

Here is the script I use. Try it out in the Python-Fu console by calling convert_to_poster(n), where n is the image’s index in GIMP’s image list. You can wrap it in a for loop for faster batch processing (see the sketch after the full script).

First we set the variables.

current_image = gimp.image_list()[n]
center_x = current_image.width / 2
center_y = current_image.height / 2
c_size = 600
c_size_d = c_size / 2 

Then we resize the image / canvas.

pdb.gimp_image_resize(current_image, c_size, c_size, c_size_d - center_x, c_size_d - center_y)

We create an empty background layer and just fill it with white.

bg_layer = pdb.gimp_layer_new(current_image, c_size, c_size, RGBA_IMAGE, "bg", 100, LAYER_MODE_NORMAL)
pdb.gimp_drawable_fill(bg_layer, FILL_WHITE)
pdb.gimp_image_add_layer(current_image, bg_layer, 1)

We also set the main foreground and background colors that will be used by later instructions.

pdb.gimp_context_set_foreground("#960acc")
pdb.gimp_context_set_background("#000000")

Create branding layer and fill the rectangular region with indigo.

branding_layer = pdb.gimp_layer_new(current_image, c_size, c_size, RGBA_IMAGE, "branding", 100, LAYER_MODE_NORMAL)
pdb.gimp_image_add_layer(current_image, branding_layer, 1)

pdb.gimp_selection_none(current_image)
pdb.gimp_image_select_rectangle(current_image, CHANNEL_OP_ADD, 0, 0, c_size, 260)
pdb.gimp_drawable_edit_fill(branding_layer, FILL_FOREGROUND)

Get the poster layer and resize it to fit the canvas.

poster_layer = current_image.layers[0]
pdb.gimp_layer_add_alpha(poster_layer)
pdb.gimp_layer_resize_to_image_size(poster_layer)

Select the non-transparent area of the poster layer and grow the selection.

pdb.gimp_image_select_item(current_image, CHANNEL_OP_REPLACE, poster_layer)
pdb.gimp_selection_grow(current_image, 3)

Create a mask and fill it with the background color that we set.

mask = pdb.gimp_layer_create_mask(branding_layer, ADD_MASK_WHITE)
pdb.gimp_layer_add_mask(branding_layer, mask)
pdb.gimp_layer_set_edit_mask(branding_layer, 1)
pdb.gimp_drawable_edit_fill(mask, FILL_BACKGROUND)

Add a text layer and color it white.

pdb.gimp_context_set_foreground("#ffffff")
text_layer = pdb.gimp_text_fontname(current_image, None, 10.0, 10.0, "Psyche Digital", 0, 1, 24.0, PIXELS, "SF Compact Display Heavy")

We select the main layer and add a legacy drop shadow.

pdb.gimp_image_select_item(current_image, CHANNEL_OP_REPLACE, poster_layer)
pdb.script_fu_drop_shadow(current_image, poster_layer, 10.0, 10.0, 10.0, (0, 0, 0, 255), 80, 0)

Here is the full script code:

def convert_to_poster(n):
    current_image = gimp.image_list()[n]
    center_x = current_image.width / 2
    center_y = current_image.height / 2
    c_size = 600
    c_size_d = c_size / 2 
    
    pdb.gimp_image_resize(current_image, c_size, c_size, c_size_d - center_x, c_size_d - center_y)

    bg_layer = pdb.gimp_layer_new(current_image, c_size, c_size, RGBA_IMAGE, "bg", 100, LAYER_MODE_NORMAL)
    pdb.gimp_drawable_fill(bg_layer, FILL_WHITE)
    pdb.gimp_image_add_layer(current_image, bg_layer, 1)

    pdb.gimp_context_set_foreground("#960acc")
    pdb.gimp_context_set_background("#000000")
    
    branding_layer = pdb.gimp_layer_new(current_image, c_size, c_size, RGBA_IMAGE, "branding", 100, LAYER_MODE_NORMAL)
    pdb.gimp_image_add_layer(current_image, branding_layer, 1)

    pdb.gimp_selection_none(current_image)
    pdb.gimp_image_select_rectangle(current_image, CHANNEL_OP_ADD, 0, 0, c_size, 260)
    pdb.gimp_drawable_edit_fill(branding_layer, FILL_FOREGROUND)

    poster_layer = current_image.layers[0]
    pdb.gimp_layer_add_alpha(poster_layer)
    pdb.gimp_layer_resize_to_image_size(poster_layer)

    pdb.gimp_image_select_item(current_image, CHANNEL_OP_REPLACE, poster_layer)
    pdb.gimp_selection_grow(current_image, 3)

    mask = pdb.gimp_layer_create_mask(branding_layer, ADD_MASK_WHITE)
    pdb.gimp_layer_add_mask(branding_layer, mask)
    pdb.gimp_layer_set_edit_mask(branding_layer, 1)
    pdb.gimp_drawable_edit_fill(mask, FILL_BACKGROUND)

    pdb.gimp_context_set_foreground("#ffffff")
    text_layer = pdb.gimp_text_fontname(current_image, None, 10.0, 10.0, "Psyche Digital", 0, 1, 24.0, PIXELS, "SF Compact Display Heavy")

    pdb.gimp_image_select_item(current_image, CHANNEL_OP_REPLACE, poster_layer)
    pdb.script_fu_drop_shadow(current_image, poster_layer, 10.0, 10.0, 10.0, (0, 0, 0, 255), 80, 0)

Run this code in GIMP’s Python-Fu console.
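
As mentioned earlier, you can wrap the function in a loop to batch-process every image that is open in GIMP. Here is a minimal sketch for the Python-Fu console; the flatten-and-save part is optional, assumes each image was opened from a file, and the output filename is just an example.

# Convert every open image, then flatten it and save a PNG copy next to the original file.
for n, image in enumerate(gimp.image_list()):
    convert_to_poster(n)
    drawable = pdb.gimp_image_flatten(image)      # merge all layers for export
    out_path = image.filename + ".poster.png"     # example name; the format is taken from the extension
    pdb.gimp_file_save(image, drawable, out_path, out_path)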

🥰😝😜😁😎

Leave a like and subscribe for more hacks, tips, and tricks.

Using Vim Hex Editor To View Keyboard Key Hex Code

The best way to predict the future is to create it.

— Anonymous.

In this TIL (Today I Learned), we will review a way to view a keyboard key’s hex code. While modifying my iTerm2 (a popular terminal emulator for macOS) key shortcuts to map my tmux Ctrl + b prefix, I wondered how to get keyboard key hex codes easily.

Then I remembered xxd (a command-line hex viewer and editor that is part of the vim package), which can read keystrokes and convert them to hex codes.

To start off, run xxd from the terminal. It will wait for input. Press your keystrokes (e.g. Ctrl + b), then press Enter to create a new line. After the new line, send EOF (End Of File), which corresponds to the keyboard keys Ctrl + d. After that, xxd outputs a hex representation of the key codes you typed.[1]
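
If you want to see the same bytes without xxd, here is a rough, Unix-only Python sketch of that workflow (my own illustrative helper, not part of xxd or vim): it puts the terminal into raw mode, reads keystrokes until you press Ctrl + d, and prints their hex codes.

import os
import sys
import termios
import tty

fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
    tty.setraw(fd)                     # raw mode: key chords reach us as raw bytes
    data = b""
    while True:
        ch = os.read(fd, 1)
        if not ch or ch == b"\x04":    # stop on Ctrl + d (EOF), just like the xxd flow above
            break
        data += ch
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)

print(data.hex())                      # e.g. pressing Ctrl + b prints "02"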

Another trick with the xxd command is reversing a hex string, like this:

echo <hex code> | xxd -revert -plain | rev | tr -d '\n' | xxd -plain

An example hex code would be 030201, which would output the reversed 010203. The rev command reverses the bytes while tr trims the newline.
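
The same reversal can be sketched in a couple of lines of Python if you just want to double-check the result of the pipeline above:

hex_code = "030201"
reversed_hex = bytes.fromhex(hex_code)[::-1].hex()   # decode, reverse the byte order, re-encode
print(reversed_hex)                                  # prints "010203"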


  1. https://stackoverflow.com/questions/36321230/finding-the-hex-code-sequence-for-a-key-combination ↩︎

Checkout Specific Directory Within Git Repo

I believe that the first test of a truly great man is his humility. Really great men have a curious feeling that the greatness is not in them but through them. And they see something divine in every other man and are endlessly, incredibly merciful.

— John Ruskin.

One day I was working on a driver port to macOS, and the only open-source code for it could be found in the Linux kernel.

Heck! The Linux kernel repository is around 2 GB including all its history, and I only needed a specific directory inside the repository. After searching the whole internet I found an answer[1].

Here are the steps to clone a specific directory from a git repository:

  1. First and foremost, create a blank local repository on your workstation: git init <repo-name>
  2. Inside the newly created repository, add the remote URL of the repository you want to clone: cd <repo-name>, then git remote add origin <remote-repo-url>
  3. Then, set the Git config to specify that you’ll be doing a sparse checkout: git config core.sparsecheckout true
  4. Add all the directories you want to check out to the sparse-checkout file found at .git/info/sparse-checkout: echo "<needed-directory>/*" >> .git/info/sparse-checkout
  5. When all the steps above are done, finally pull the repository objects: git pull --depth=1 origin master

That’s all that is needed; the directory is now cloned and can be worked on. So guys, if you have any questions, hit me up on my social media accounts.

Originally posted on August 5, 2019.


  1. https://stackoverflow.com/a/28039894 ↩︎

C# .NET Projects Can Be Compiled and Run in MacOS or Linux

My primary goal of hacking was the intellectual curiosity, the seduction of adventure.

— Kevin Mitnick.

I never thought that a .NET solution could be compiled and run on Linux. But as I checked the dotnet-core GitHub, I found there are many ways to do it.

The first is through Mono, a compatible open-source alternative to the .NET Framework (the latter being proprietary to Microsoft). You can create Windows Forms apps and other UI-intensive .NET projects with it. Mono is sponsored by Microsoft, though it is not officially supported.

The other solution, if you’re working on a .NET Core project, is dotnet-core itself. Back in 2014 Microsoft published an open-source, bare-bones .NET SDK (Software Development Kit) derived from ASP.NET work, which they called dotnet-core. Basically, it is a stripped-down version of the .NET Framework without all the heavy UI and forms. The project itself is modular and can be compiled on different platforms.

So if you’re planning to use C# or another .NET language, don’t be afraid: your projects can be built and run on different platforms.

If you’re planning to install the package on Arch Linux, here is the command:

pacman -Sy dotnet-core

Or check the mono flavor:

pacman -Sy mono

That’s it guys, brand new knowledge for me. I’ll probably try .NET Core on my next project. 🤔 Hope you guys enjoyed this article and, as always, live life.