Where It All Started.

Life, Stock Trading, Investments, Business, and Startups. Mostly programming stuff.

Category: Software Development

Creating A Cloudflare Worker Using Rust For Fetching Resources

To learn something new, you need to try new things and not be afraid to be wrong.

— Roy T. Bennett.

Have you ever worked with WebAssembly? Now is a good time to start: it has been supported by browsers from Chromium-based ones (e.g. Edge, Brave, Google Chrome) to Firefox, and enabled by default, since 2019. Plus, you can use your favorite language (well, technically not all of them) to develop WebAssembly.

In this quick tutorial we will be using Rust (not the game, though).

Ferris, the Rust language mascot, saying hello 👋.
Come on, let’s jump in! 💪

Prerequisites

First of all, you must have a Cloudflare account. Second, you need the Rust toolchain installed on your computer, and I also assume you are currently running Windows 10 or a Linux distro with a proper environment set up.

If you don’t have Rust, install it via rustup, the official toolchain installer. As for the Cloudflare account, the free tier is enough: it gives you 100,000 Worker requests per day and a free Key-Value (KV) storage namespace.
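
On Linux or macOS, one common way to install the toolchain is via rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh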

So where do we start?

The first thing we need to do is install wrangler, a command-line tool developed by Cloudflare specifically for developing and deploying Workers. Install it using the cargo command utility.

cargo install wrangler

The command above will first fetch the source from crates.io and compile it into a binary. It will also automatically install the binary into your ~/.cargo/bin directory.

💡: Cloudflare Workers is similar to AWS Lambda and Azure Functions. They all fall under the serverless computing category.

After the installation of wrangler, you need to authenticate using your Cloudflare account API key, which you can get from the user settings panel.

wrangler login

If all works well, the next thing we need to do is generate the Cargo project using the wrangler command line. Execute the command below to generate a project from the Rust WASM worker template:

wrangler generate worker_fetch_demo https://github.com/cloudflare/rustwasm-worker-template.git --type="rust"

After that, go inside the folder named worker_fetch_demo and edit the file Cargo.toml. Add the following crate dependencies under the [dependencies] section.

cfg-if = "0.1.2"
wasm-bindgen = { version = "0.2", features = ["serde-serialize"] }
console_error_panic_hook = { version = "0.1.1", optional = true }
wee_alloc = { version = "0.4.2", optional = true }
futures = { version = "0.3", default-features = false }
js-sys = "0.3.45"
wasm-bindgen-futures = "0.4"
serde = { version = "1.0", features = ["derive"] }
serde_derive = "^1.0.59"
serde_json = "1.0"
log = "0.4"
console_log = { version = "0.2", optional = true }

The wasm-bindgen crate is the most important one, as it provides the bindings that let our package call into JavaScript scopes and other web and WebAssembly related functionality. You also need to add the web-sys crate, which provides the basic mappings and calls to JavaScript functions.

You’ll be able to figure out what the other crates are for if you’ve already read The Rust Programming Language book.

[dependencies.web-sys]
version = "0.3.45"
features = [
  'Headers',
  'Request',
  'RequestInit',
  'Response',
  'ServiceWorkerGlobalScope',
]

After adding those crate dependencies, they will be fetched automatically on build or when you run cargo update.

The next thing we modify is the file worker > worker.js. This file serves as the main entry point of our program and calls into our compiled wasm module. We need to make a minor modification to it, specifically capturing the request and serving the wasm response as JSON.

async function handleRequest(request) {
  const { test } = wasm_bindgen;
  await wasm_bindgen(wasm);

  const data = await test();
  return new Response(JSON.stringify(data), {
    headers: {
      'Content-Type': 'application/json;charset=UTF-8',
    },
    status: 200,
  });
}

We move on now to the rust files. 🦀

In the file src > lib.rs, add the following code. It sets up a basic initialization for our console log (similar to JavaScript console.log) when the console_log feature is enabled.

cfg_if! {
    if #[cfg(feature = "console_log")] {
        fn init_log() {
            console_log::init_with_level(Level::Trace).expect("error initializing log");
        }
    } else {
        fn init_log() {}
    }
}

Next, we add a function that hooks into js_sys to return the ServiceWorkerGlobalScope.

Specifically on Cloudflare, the normal browser fetch call won’t work, as Workers run on a headless V8 JavaScript engine. That’s why we need to hook into the service worker’s internal HTTP client instead.

pub fn worker_global_scope() -> Option<web_sys::ServiceWorkerGlobalScope> {
    js_sys::global().dyn_into::<web_sys::ServiceWorkerGlobalScope>().ok()
}

After adding our worker_global_scope, we proceed with editing the greet function. First, rename it to test, then add our first instruction to hook Rust panics to console_error. Then call init_log to initialize the basic logging functionality.

std::panic::set_hook(Box::new(console_error_panic_hook::hook));
init_log();

Then we initialize our request options with the GET method; you could also use other HTTP methods (e.g. POST, PUT, DELETE, …) depending on your application’s needs and the endpoint you want to call.

let mut opts = RequestInit::new();
opts.method("GET");

Next, we create the request payload that we will submit to our custom fetch. It contains the endpoint and the request options.

let request = Request::new_with_str_and_init(
    "https://httpbin.org/get",
    &opts
)?;

After that, we grab the global scope using the function we created earlier and call its fetch, wrapping the call in a future (an asynchronous computation, similar to a JavaScript Promise if you’re more familiar with that term).

let global = worker_global_scope().unwrap();
let resp_value = JsFuture::from(global.fetch_with_request(&request)).await?;

assert!(resp_value.is_instance_of::<Response>());
let resp: Response = resp_value.dyn_into().unwrap();
let json = JsFuture::from(resp.json()?).await?;

We then cast the returned value into a Response, await its JSON body, and return it.

Here is the full wasm function that will be called from the worker.js we defined earlier:

#[wasm_bindgen]
pub async fn test() -> Result<JsValue, JsValue> {
    std::panic::set_hook(Box::new(console_error_panic_hook::hook));
    init_log();

    let mut opts = RequestInit::new();
    opts.method("GET");

    let request = Request::new_with_str_and_init(
        "https://httpbin.org/get",
        &opts
    )?;

    let global = worker_global_scope().unwrap();
    let resp_value = JsFuture::from(global.fetch_with_request(&request)).await?;

    assert!(resp_value.is_instance_of::<Response>());
    let resp: Response = resp_value.dyn_into().unwrap();
    let json = JsFuture::from(resp.json()?).await?;

    Ok(json)
}

Now we need to test it to see if everything’s okay. Spin up a local server using the command below.

wrangler dev

Test everything and try to call the URL returned by wrangler dev using Postman or the Insomnia HTTP client. If everything is working fine, it’s now time to deploy the worker to Cloudflare’s live servers.

wrangler publish

After running the command above, it will return a live worker URL which you can now access from anywhere.
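
You can do a quick check with curl; the exact subdomain below is just a placeholder, use whatever URL wrangler reports for your account:

curl https://worker_fetch_demo.<your-subdomain>.workers.dev/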

That’s all guys!

Conclusion

You can find the complete repository here.

This is not the only way to call fetch in a Rust Cloudflare Worker; the other method involves hooking directly into the JavaScript-exposed fetch (kindly look at Cloudflare’s example files). If you have any questions, kindly leave a comment or DM me 😉.

Follow me for similar articles, tips, and tricks ❤.

Change Default Template In Inkscape For Windows 10

Recently, I’ve been doing some vectors and animations for my new Android app for the stock market. Every time I open Inkscape1, it greets me with the default template, and the next thing I do is change the document properties to the way I want; it became a chore.


Every child is an artist. The problem is how to remain an artist once we grow up.

— Pablo Picasso.

So I searched the internet for how to set the default template to my liking.

Steps to change the default template

  1. Create a new document and set its document properties, which can be found under File > Document Properties.
  2. The Document Properties dialog will show up. Modify it based on what you want to see every time you open Inkscape.
  3. After that, do a File > Save As... and save it to your local user Inkscape template directory, which will be in C:\Users\<your-user>\AppData\Roaming\Inkscape\template
  4. Save it as default.svg, then restart Inkscape for the changes to take effect.

Enjoy, that’s all. 🍂


  1. Inkscape is a free and open-source vector graphics editor used to create vector images, primarily in Scalable Vector Graphics (SVG) format. Other formats can be imported and exported. ↩︎

Sending Email Using MailKit in ASP.NET Core Web API

You do not need to know precisely what is happening, or exactly where it is all going. What you need is to recognize the possibilities and challenges offered by the present moment, and to embrace them with courage, faith and hope.

— Thomas Merton.

Hey guys, recently I’ve been working on an ASP.NET Core project that needed an email service for sending reports. And I’ve been dumbfounded by some tutorials on how they implemented the email service functionality: some are overcomplicated while others are oversimplified.

So here I am creating yet another tutorial for sending email using MailKit and .NET Core.

Let’s jump in!

Prerequisites

First of all, you must have the .NET Core 3.1 SDK (Software Development Kit) installed on your computer, and I also assume you are currently running Windows 10 or some Linux distro with a proper environment set up.

We will also use MailKit, a package that has become the de facto standard for sending emails, being preferred and recommended by Microsoft in their tutorials over the standard System.Net.Mail.

So where do we start?

First we create our ASP.NET Web API project on the command-line. Execute the command below to create the project.

dotnet new webapi --name MyProject

The dotnet new command creates a project folder based on the declared template, which in our case is webapi. The --name flag indicates that the next argument is the output path and project name.

After that, go inside the project root folder. Then we add a reference to the MailKit NuGet package. If the project already references MailKit, try running dotnet restore to fetch and update the referenced assemblies locally.

dotnet add package MailKit

Then we create and add the SMTP (Simple Mail Transfer Protocol) settings to appsettings.json and appsettings.Development.json. In the example below I’ve used a Gmail setup; just fill it in with your own account settings and be sure to use an app password in the password field.

If you have a custom SMTP server just replace the port and server as well as other needed fields.

"SmtpSettings": {
  "Server": "smtp.gmail.com",
  "Port": 587,
  "SenderName": "My Name",
  "SenderEmail": "<my-account-user>@gmail.com",
  "Username": "<my-account-user>@gmail.com",
  "Password": "<my-account-password>"
}

Create an entity structure which will store the SMTP settings. This structure will receive the settings we set up above in appsettings.json.

namespace MyProject.Entities
{
    public class SmtpSettings
    {
        public string Server { get; set; }
        public int Port { get; set; }
        public string SenderName { get; set; }
        public string SenderEmail { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
    }
}

Next, we set up the mailer interface to provide to our controllers. The IMailer interface exposes a single method for sending email asynchronously. You can add more methods, but I feel one is enough.

public interface IMailer
{
    Task SendEmailAsync(string email, string subject, string body);
}

Implement the mailer class with a basic stub, and after that try to build it and check whether there are any errors or lint warnings.

public class Mailer : IMailer
{
    public async Task SendEmailAsync(string email, string subject, string body)
    {
        await Task.CompletedTask;
    }
}

If everything builds properly, we implement the actual sending functionality. This method accepts the recipient email, the subject of the email, and the message body.

using MailKit.Net.Smtp;
using MailKit.Security;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Options;
using MimeKit;
using System;
using System.Threading.Tasks;
using MyProject.Entities;

namespace MyProject.Services
{
    public interface IMailer
    {
        Task SendEmailAsync(string email, string subject, string body);
    }

    public class Mailer : IMailer
    {
        private readonly SmtpSettings _smtpSettings;
        private readonly IWebHostEnvironment _env;

        public Mailer(IOptions<SmtpSettings> smtpSettings, IWebHostEnvironment env)
        {
            _smtpSettings = smtpSettings.Value;
            _env = env;
        }

        public async Task SendEmailAsync(string email, string subject, string body)
        {
            try
            {
                var message = new MimeMessage();
                message.From.Add(new MailboxAddress(_smtpSettings.SenderName, _smtpSettings.SenderEmail));
                message.To.Add(MailboxAddress.Parse(email));
                message.Subject = subject;
                message.Body = new TextPart("html")
                {
                    Text = body
                };

                using (var client = new SmtpClient())
                {
                    client.ServerCertificateValidationCallback = (s, c, h, e) => true;

                    if (_env.IsDevelopment())
                    {
                        // Gmail on port 587 expects STARTTLS rather than implicit SSL.
                        await client.ConnectAsync(_smtpSettings.Server, _smtpSettings.Port, SecureSocketOptions.StartTls);
                    }
                    else
                    {
                        await client.ConnectAsync(_smtpSettings.Server);
                    }

                    await client.AuthenticateAsync(_smtpSettings.Username, _smtpSettings.Password);
                    await client.SendAsync(message);
                    await client.DisconnectAsync(true);
                }
            }
            catch (Exception e)
            {
                throw new InvalidOperationException(e.Message);
            }
        }
    }
}

In the source above we first create a MimeMessage, which contains all the data needed for the email header and body; it covers the SMTP MAIL FROM, RCPT TO, and DATA parts.

After that we set up the SMTP client with the fields we configured in appsettings.json. The client.AuthenticateAsync call can be omitted if the SMTP server doesn’t have an authentication flow.

When everything is done in Mailer, we edit the Startup.cs file in the project root folder. In ConfigureServices, we bind the SMTP settings section and register a singleton that will handle the mail service.

services.Configure<SmtpSettings>(Configuration.GetSection("SmtpSettings"));
services.AddSingleton<IMailer, Mailer>();

After setting up the services in Startup, we head to WeatherForecastController.cs, which is included when we bootstrap the project. This file is part of the webapi template; you can use your own custom controller to call the IMailer interface.

private readonly IMailer _mailer;

public WeatherForecastController(ILogger<WeatherForecastController> logger, IMailer mailer)
{
    _logger = logger;
    _mailer = mailer;
}

Notice how we add the IMailer mailer parameter; it becomes available to us because we registered the singleton in Startup. We then store it in a private field for later use.

We also create another method to handle a new route, /export, for sending a sample weather report. Change it according to your own setup.

[HttpGet]
[Route("export")]
public async Task<IActionResult> ExportWeatherReport()
{
    await _mailer.SendEmailAsync("[email protected]", "Weather Report", "Detailed Weather Report");
    return NoContent();
}

In the code above we simply use the injected mailer and call our exposed SendEmailAsync method. Check the full source below for the packages that need to be imported in the module.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using MyProject.Services;

namespace MyProject.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        private static readonly string[] Summaries = new[]
        {
            "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
        };

        private readonly ILogger<WeatherForecastController> _logger;
        private readonly IMailer _mailer;

        public WeatherForecastController(ILogger<WeatherForecastController> logger, IMailer mailer)
        {
            _logger = logger;
            _mailer = mailer;
        }

        [HttpGet]
        public IEnumerable<WeatherForecast> Get()
        {
            var rng = new Random();
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateTime.Now.AddDays(index),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            })
            .ToArray();
        }

        [HttpGet]
        [Route("export")]
        public async Task<IActionResult> ExportWeatherReport()
        {
            await _mailer.SendEmailAsync("[email protected]", "Weather Report", "Detailed Weather Report");
            return NoContent();
        }
    }
}

When everything’s done, we build and test the web API project. Execute the command below to check whether there are any errors.

dotnet build

Then deploy or publish it on IIS (Internet Information Services), or simply run it standalone with dotnet run.
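
To smoke-test the mail route locally, you can hit the export endpoint with curl; the port below assumes the webapi template’s default Kestrel HTTPS port, so adjust it to whatever dotnet run reports:

curl -k -i https://localhost:5001/weatherforecast/export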

Conclusion

If you’re building an email service, always consider making it as simple as possible to avoid unintended bugs. Sending email has never been easier, and you don’t need complicated flows once you switch to MailKit.

You can find the complete repository here.

Follow me for similar articles, tips, and tricks ❤.

Gimp Automating Image Processing with Python Fu

What a large volume of adventures may be grasped within the span of his little life by him who interests his heart in everything.

— Laurence Sterne.

Hi guys, one night I decided to create an online store and sell some drop-ship products. I grabbed some pictures from the wholesale seller and planned to customize those images (just to put some store branding on them).

There were a hundred images I wanted to customize; by hand it would take ages, so I decided to create a batch script. My first thought was to use GIMP (an open-source image manipulator) and Script-Fu. After trying out the result in the Python-Fu console, I settled with this approach. It was a simple design, but I was satisfied.

Here is the script I use. Try it out in the Python-Fu console and call it as convert_to_poster(n), where n is the image’s index in GIMP’s image list. You can also loop over it for batch processing, as shown after the full script below.

First we set the variables.

current_image = gimp.image_list()[n]
center_x = current_image.width / 2
center_y = current_image.height / 2
c_size = 600
c_size_d = c_size / 2 

Then we resize the image / canvas.

pdb.gimp_image_resize(current_image, c_size, c_size, c_size_d - center_x, c_size_d - center_y)

We create an empty background layer and just fill it with white.

bg_layer = pdb.gimp_layer_new(current_image, c_size, c_size, RGBA_IMAGE, "bg", 100, LAYER_MODE_NORMAL)
pdb.gimp_drawable_fill(bg_layer, FILL_WHITE)
pdb.gimp_image_add_layer(current_image, bg_layer, 1)

We also set the main foreground and background colors that will be used by later instructions.

pdb.gimp_context_set_foreground("#960acc")
pdb.gimp_context_set_background("#000000")

Create the branding layer and fill the rectangular region with our foreground color.

branding_layer = pdb.gimp_layer_new(current_image, c_size, c_size, RGBA_IMAGE, "branding", 100, LAYER_MODE_NORMAL)
pdb.gimp_image_add_layer(current_image, branding_layer, 1)

pdb.gimp_selection_none(current_image)
pdb.gimp_image_select_rectangle(current_image, CHANNEL_OP_ADD, 0, 0, c_size, 260)
pdb.gimp_drawable_edit_fill(branding_layer, FILL_FOREGROUND)

Get the poster layer and resize it to fit the canvas.

poster_layer = current_image.layers[0]
pdb.gimp_layer_add_alpha(poster_layer)
pdb.gimp_layer_resize_to_image_size(poster_layer)

Select the non-transparent area of the poster layer and grow the selection.

pdb.gimp_image_select_item(current_image, CHANNEL_OP_REPLACE, poster_layer)
pdb.gimp_selection_grow(current_image, 3)

Create a mask and fill it with the background color that we set.

mask = pdb.gimp_layer_create_mask(branding_layer, ADD_MASK_WHITE)
pdb.gimp_layer_add_mask(branding_layer, mask)
pdb.gimp_layer_set_edit_mask(branding_layer, 1)
pdb.gimp_drawable_edit_fill(mask, FILL_BACKGROUND)

Add a text layer and color it white.

pdb.gimp_context_set_foreground("#ffffff")
text_layer = pdb.gimp_text_fontname(current_image, None, 10.0, 10.0, "Psyche Digital", 0, 1, 24.0, PIXELS, "SF Compact Display Heavy")

We select the main layer and add a legacy drop shadow.

pdb.gimp_image_select_item(current_image, CHANNEL_OP_REPLACE, poster_layer)
pdb.script_fu_drop_shadow(current_image, poster_layer, 10.0, 10.0, 10.0, (0, 0, 0, 255), 80, 0)

Here is the full script code:

def convert_to_poster(n):
    current_image = gimp.image_list()[n]
    center_x = current_image.width / 2
    center_y = current_image.height / 2
    c_size = 600
    c_size_d = c_size / 2 
    
    pdb.gimp_image_resize(current_image, c_size, c_size, c_size_d - center_x, c_size_d - center_y)

    bg_layer = pdb.gimp_layer_new(current_image, c_size, c_size, RGBA_IMAGE, "bg", 100, LAYER_MODE_NORMAL)
    pdb.gimp_drawable_fill(bg_layer, FILL_WHITE)
    pdb.gimp_image_add_layer(current_image, bg_layer, 1)

    pdb.gimp_context_set_foreground("#960acc")
    pdb.gimp_context_set_background("#000000")
    
    branding_layer = pdb.gimp_layer_new(current_image, c_size, c_size, RGBA_IMAGE, "branding", 100, LAYER_MODE_NORMAL)
    pdb.gimp_image_add_layer(current_image, branding_layer, 1)

    pdb.gimp_selection_none(current_image)
    pdb.gimp_image_select_rectangle(current_image, CHANNEL_OP_ADD, 0, 0, c_size, 260)
    pdb.gimp_drawable_edit_fill(branding_layer, FILL_FOREGROUND)

    poster_layer = current_image.layers[0]
    pdb.gimp_layer_add_alpha(poster_layer)
    pdb.gimp_layer_resize_to_image_size(poster_layer)

    pdb.gimp_image_select_item(current_image, CHANNEL_OP_REPLACE, poster_layer)
    pdb.gimp_selection_grow(current_image, 3)

    mask = pdb.gimp_layer_create_mask(branding_layer, ADD_MASK_WHITE)
    pdb.gimp_layer_add_mask(branding_layer, mask)
    pdb.gimp_layer_set_edit_mask(branding_layer, 1)
    pdb.gimp_drawable_edit_fill(mask, FILL_BACKGROUND)

    pdb.gimp_context_set_foreground("#ffffff")
    text_layer = pdb.gimp_text_fontname(current_image, None, 10.0, 10.0, "Psyche Digital", 0, 1, 24.0, PIXELS, "SF Compact Display Heavy")

    pdb.gimp_image_select_item(current_image, CHANNEL_OP_REPLACE, poster_layer)
    pdb.script_fu_drop_shadow(current_image, poster_layer, 10.0, 10.0, 10.0, (0, 0, 0, 255), 80, 0)

Run this code in GIMP’s Python-Fu console.
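
For batch processing, a minimal loop over every image currently open in GIMP (assuming convert_to_poster has already been defined in the Python-Fu console) could look like this:

# process every image currently loaded in GIMP
for i in range(len(gimp.image_list())):
    convert_to_poster(i)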

🥰😝😜😁😎

Leave a like and subscribe for more hacks, tips, and tricks.

Using Vim Hex Editor To View Keyboard Key Hex Code

The best way to predict the future is to create it.

— Anonymous.

In this TIL (Today I Learned), we will review a way to view a keyboard key’s hex code. As I was modifying my iTerm2 (a popular terminal emulator for macOS) key shortcuts to map my tmux Ctrl + b prefix, I wondered how to get keyboard key hex codes easily.

Then I remembered the xxd command (a command-line hex viewer and editor that is part of the vim package), which can read keystrokes and convert them to hex.

To start off, we run xxd from the terminal. It will wait for a line of input. Press your keystrokes (e.g. Ctrl + b), then press Enter to create a new line. After the new line, send EOF (End Of File), which corresponds to the keyboard shortcut Ctrl + d. After doing the process above, xxd outputs a hex representation of the key codes you pressed.1
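
For example, capturing the tmux prefix might look roughly like this (the echoed ^B and the exact column spacing may differ on your terminal); 02 is Ctrl + b and 0a is the newline from Enter:

$ xxd
^B
00000000: 020a                                     ..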

Another trick with the xxd command is reversing a hex string, like this:

echo <hex code> | xxd -revert -plain | rev | tr -d '\n' | xxd -plain

An example hex code would be 030201; that would output the reversed 010203. The rev command reverses the bytes while tr strips the newline.


  1. https://stackoverflow.com/questions/36321230/finding-the-hex-code-sequence-for-a-key-combination ↩︎

Checkout Specific Directory Within Git Repo

I believe that the first test of a truly great man is his humility. Really great men have a curious feeling that the greatness is not in them but through them. And they see something divine in every other man and are endlessly, incredibly merciful.

— John Ruskin.

One day I was working on a driver port to macOS (Apple’s Macintosh OS), and the only open-source code for it could be found in the Linux kernel.

Heck! The Linux kernel repository is around 2 GB including all history, and I only needed a specific directory inside it. After searching the whole internet I found an answer1.

Here are the steps to clone a specific directory from a git repository (the full command sequence is consolidated after the list):

  1. First and foremost, create a blank local repository on your workstation: git init <repo-name>
  2. Inside the newly created repository, add the remote URL of the repository you want to clone: cd <repo-name> then git remote add origin <remote-repo-url>
  3. Then, set the git config to specify that you’ll be doing a sparse checkout: git config core.sparseCheckout true
  4. Add all the directories you want to check out to the sparse-checkout file found at .git/info/sparse-checkout: echo "<needed-directory>/*" >> .git/info/sparse-checkout
  5. When all the above steps are done, finally pull the repository objects: git pull --depth=1 origin master
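
Putting it all together, the whole sequence (using the same placeholders as above) looks like this:

git init <repo-name>
cd <repo-name>
git remote add origin <remote-repo-url>
git config core.sparseCheckout true
echo "<needed-directory>/*" >> .git/info/sparse-checkout
git pull --depth=1 origin master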

That’s all that’s needed; the directory is now cloned and can be worked on. If you have any questions, hit me up on my social media accounts.

Originally posted on August 5, 2019.


  1. https://stackoverflow.com/a/28039894 ↩︎

C# .NET Projects Can Be Compiled and Run in MacOS or Linux

My primary goal of hacking was the intellectual curiosity, the seduction of adventure.

— Kevin Mitnick.

I never thought before that a .NET solution could be compiled and run on Linux. But as I checked the dotnet-core GitHub, I found there are several ways to do it.

The first is through Mono, a compatible open-source alternative to the .NET Framework (the latter being proprietary to Microsoft). You can build Windows Forms applications and other UI-heavy .NET projects with it. Mono is sponsored by Microsoft, but it is not officially supported.

The other solution, if you’re working on a .NET Core project, is dotnet-core itself. In 2014 Microsoft announced an open-source, bare-bones .NET SDK (Software Development Kit) derived from the ASP.NET work and called it dotnet-core. Basically, it is a stripped-down version of the .NET Framework without all the heavy UI and forms. The project itself is modular and can be compiled on different platforms.

So if you’re planning to use C# or another .NET-dependent language, don’t be afraid, as your projects can be built and run on different platforms.

If you’re planning to install the package on Arch here is the command:

pacman -Sy dotnet-core

Or check the mono flavor:

pacman -Sy mono
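
Once the SDK is installed, you can verify that builds and runs work on Linux by scaffolding a quick console app (the project name here is just an example):

dotnet new console -o HelloDotnet
cd HelloDotnet
dotnet run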

That’s it guys, brand new knowledge for me. On my next project I’ll probably try to use .NET Core. 🤔 Hope you enjoyed this article and, as always, live life.

Learning Git Shortcuts By Examples

Together we can change the world, just one random act of kindness at a time.

— Ron Hall.

Hi guys, I’ve wondered before how to create aliases in git (a version control system), and after reading and delving into the documentation, I found out it can be done with a few global configs. Check out the commands below for some samples.

git config --global alias.co checkout
git config --global alias.br branch
git config --global alias.ci commit
git config --global alias.st status

These commands emulate SVN (Subversion)-like keywords and can be called on the command line, e.g. git ci to commit your changes. Additionally, here are some useful aliases I’ve found while exploring the internet.

git config --global alias.hist "log --pretty=format:'%h %ad | %s%d [%an]' --graph --date=short"
git config --global alias.type 'cat-file -t'
git config --global alias.dump 'cat-file -p'
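
Under the hood, these commands simply write entries to the [alias] section of your global ~/.gitconfig, so you can also edit them there directly; the result looks roughly like this:

[alias]
	co = checkout
	br = branch
	ci = commit
	st = status
	hist = log --pretty=format:'%h %ad | %s%d [%an]' --graph --date=short
	type = cat-file -t
	dump = cat-file -p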

To conclude, there are many ways to adjust your workflow (especially if you’re coming from one VCS to another); one of them is simplifying commands to emulate a familiar behavior.

So guys, what tricks do you use in git to simplify your workflow? Hope you enjoyed this article!

Running Seeds After Edeliver Deploy

Success is neither magical nor mysterious. Success is the natural consequence of consistently applying basic fundamentals.

— E. James Rohn.

After deploying your application to a production or staging server, have you ever wondered how you can import your seed data? This post is specifically targeted at Elixir web apps.

Assuming you have already run the migration process and migrated all the tables, go to the built edeliver release folder and run ./bin/my_app_name remote_console to access the iex console for this OTP (Open Telecom Platform) web app.

When you’re inside the iex console, enter this command:

:code.priv_dir(:my_app_name)
|> Path.join("repo/seeds.exs") 
|> Code.require_file()

The commands above will be evaluated by the console interpreter and run the seeds file: they locate the file in the priv directory and execute seeds.exs. After that, check your database to confirm the seed data was inserted.
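
For reference, a typical priv/repo/seeds.exs just inserts records through your repo; the module and fields below are placeholders, not from the original post:

# priv/repo/seeds.exs
MyAppName.Repo.insert!(%MyAppName.Accounts.User{
  email: "admin@example.com",
  name: "Admin"
})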

That’s all there is to it, guys; check my other posts for some additional tips and tricks. Hope you enjoyed this article!

Debugging PHP with Xdebug over SSH

The pessimist sees difficulty in every opportunity. The optimist sees opportunity in every difficulty.

— Winston Churchill.

Back in the days before Xdebug1 came to light, debugging PHP2 was pretty basic, like in C3, where you rely on print or var_dump. Today, with improved tools and setups, PHP can be debugged remotely with the help of ssh + xdebug.

To do this, first connect to the remote development station where the PHP server is located (e.g. PHP’s built-in server, Nginx, Apache HTTP Server).

ssh -R 9000:localhost:9000 user@server_address_goes_here

The -R flag sets up remote (reverse) port forwarding over SSH: connections made to port 9000 on the server are tunneled back to port 9000 on our workstation, where our editor’s debugging client listens.

After setting up remote port forwarding, you need to configure xdebug on the remote server and point it at the address it should connect to. For my use case I point mine at the loopback address 127.0.0.1 (as I only use it for development purposes), which is exactly the address we forwarded above. Edit the php.ini file and make the xdebug configuration similar to the sample below.

[xdebug]
zend_extension="/usr/lib64/php/7.1/modules/xdebug.so"
xdebug.remote_autostart=1
xdebug.remote_enable=1
xdebug.remote_host=127.0.0.1
xdebug.remote_port=9000
xdebug.remote_log=/tmp/xdebug_remote.log
xdebug.remote_connect_back=0

The most notable parts of the config are xdebug.remote_host and xdebug.remote_port; this is where xdebug will connect back to. After all the changes have been made, just restart the server (e.g. for Apache HTTP running on CentOS):

service httpd restart

And in our workstation editor, with a PHP debugger extension, we just point it at (i.e. listen on) 127.0.0.1:9000. That’s all; now we can debug PHP remotely.
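
For example, if your editor happens to be VS Code with the PHP Debug extension (the post doesn’t assume any specific editor, so treat this as just one option), a minimal .vscode/launch.json would be:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for Xdebug",
            "type": "php",
            "request": "launch",
            "port": 9000
        }
    ]
}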

🧡🧡🧡🧡🧡

Hope you guys enjoyed this article!


  1. Xdebug is a PHP extension which provides debugging and profiling capabilities. It uses the DBGp debugging protocol. ↩︎
  2. PHP is a general-purpose programming language originally designed for web development. It was originally created by Rasmus Lerdorf in 1994; the PHP reference implementation is now produced by The PHP Group. ↩︎
  3. C is a general-purpose, procedural computer programming language supporting structured programming, lexical variable scope, and recursion, while a static type system prevents unintended operations. ↩︎