<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The Daily Awesome]]></title><description><![CDATA[The Daily Awesome]]></description><link>http://aeissa.dev/</link><image><url>http://aeissa.dev/favicon.png</url><title>The Daily Awesome</title><link>http://aeissa.dev/</link></image><generator>Ghost 5.79</generator><lastBuildDate>Sun, 03 May 2026 11:28:53 GMT</lastBuildDate><atom:link href="http://aeissa.dev/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Deploy with AWS ECS]]></title><description><![CDATA[<p>Deploying with AWS ECS requires provisioning AWS resources on which the service is run and configuring the service itself. These are abbreviated notes taken during my attempt to deploy three services with AWS ECS. It is important to note that these notes are for EC2 and not Fargate.</p><h2 id="aws-ec2-provisioning">AWS EC2 Provisioning</h2><h3 id="auto-scaling-groups">Auto Scaling Groups</h3>]]></description><link>http://aeissa.dev/deploy-to-aws-ecs/</link><guid isPermaLink="false">666ae823cfa52b163870a4f0</guid><dc:creator><![CDATA[Ali Eissa]]></dc:creator><pubDate>Sun, 30 Jun 2024 17:02:32 GMT</pubDate><content:encoded><![CDATA[<p>Deploying with AWS ECS requires provisioning AWS resources on which the service is run and configuring the service itself. These are abbreviated notes taken during my attempt to deploy three services with AWS ECS. It is important to note that these notes are for EC2 and not Fargate.</p><h2 id="aws-ec2-provisioning">AWS EC2 Provisioning</h2><h3 id="auto-scaling-groups">Auto Scaling Groups</h3><p>Auto Scaling Groups are groups of EC2 instances that are scaled out and in, in response to events and according to scaling policies, if any are configured. The number of instances is controlled by three variables:</p><p><strong>Desired Capacity</strong>: The number of instances running when the Auto Scaling Group is launched.</p><p><strong>Minimum Capacity</strong>: The minimum number of instances running. EC2 Auto Scaling will not go below this number of instances regardless of what a scale-in event indicates.</p><p><strong>Maximum Capacity</strong>: The maximum number of instances running. EC2 Auto Scaling will not exceed this number of instances regardless of what a scale-out event indicates.</p><h3 id="launch-templates">Launch Templates</h3><p>The template that AWS EC2 Auto Scaling uses to launch instances. For the instances to be usable, the configuration parameters below must be assigned appropriately.</p><p><strong>Amazon Machine Image (AMI)</strong>: The image from which the AWS EC2 instance is launched. The AMI must be AWS ECS optimized, i.e. it must include the AWS ECS agent.</p><p><strong>User Data</strong>: The script run when an AWS EC2 instance is launched. This script must register the name of the AWS ECS cluster to which the instance must belong. In the example below, the instance will be registered with the cluster acme.</p><pre><code>#!/bin/bash
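# This user-data script runs once at instance boot. The ECS agent reads
# /etc/ecs/ecs.config and uses ECS_CLUSTER to register the instance with the
# cluster named below. Other agent options can be appended the same way,
# for example (an assumed extra, not part of the original setup):
#   echo ECS_ENABLE_TASK_IAM_ROLE=true &gt;&gt; /etc/ecs/ecs.config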
echo ECS_CLUSTER=acme &gt;&gt; /etc/ecs/ecs.config</code></pre><h2 id="aws-ecs">AWS ECS</h2><h3 id="cluster">Cluster</h3><p>An ECS cluster provides the infrastructure on which the containers are deployed. Every AWS ECS resource is deployed in a cluster.</p><h3 id="tasks">Tasks</h3><p>A task is the lowest level of abstraction in AWS ECS; it is where containers are deployed. AWS ECS deploys a task according to parameters defined in a configuration file, called a task definition.</p><p>The following parameters were important in deploying spaced reps.</p><p><strong>Container Definitions:</strong> An array of JSON documents in which one defines how Docker should run the containers; it is effectively a list of Dockerfile-like definitions.</p><p><strong>Execution Role ARN</strong>: The Amazon Resource Name (ARN) of the role that the Docker daemon and the AWS ECS agent use to deploy a container. Docker uses this role to pull the Docker image and run a container, while the AWS ECS agent uses it to send logs to CloudWatch.</p><p>There is an AWS managed policy, AmazonECSTaskExecutionRolePolicy, with permissions to read from ECR and write to CloudWatch, so it is a matter of attaching that policy to a role and using the ARN of that role.</p><p><strong>Network Mode:</strong> The network mode that AWS ECS assigns to the network interface of the container.</p><p>AWS recommends the awsvpc network mode, but in that mode container network interfaces are not assigned public IP addresses, so the containers will not have access to the internet without a NAT gateway.</p><h3 id="services">Services</h3><p>An AWS ECS service is a group of tasks deployed using a given task definition. It is the abstraction that makes the tasks, and in turn the containers, scalable.</p><p><strong>Desired Tasks</strong>: The number of tasks to run.</p><p><strong>Deployment Type: </strong>The service deployment type, either Rolling or Blue/Green.</p><p>The Rolling type has two sub-configuration values, minimum running tasks % and maximum running tasks %. These values must be set such that there are enough resources for a deployment to be successful. For example, a minimum running tasks % of 100 and a maximum running tasks % of 150 require that there are enough resources for 150% of the number of tasks to be deployed.</p><p>Blue/Green, meanwhile, requires enough resources to run double the number of desired tasks.</p><p><strong>Network Configuration</strong>: The configuration assigned to the network interfaces of the containers launched. This is required only when the network mode in the task definition is set to awsvpc. For the other modes, the containers share the network interface of their host, so the network configuration assigned to their hosts&apos; interface applies to them.</p><p><strong>Load Balancer: </strong>The load balancer used to distribute traffic between containers.</p><p><strong>Capacity Provider: </strong>This is the abstraction that ensures the availability of the EC2 instances on which the tasks, i.e. containers, are deployed. It sends scaling events to the Auto Scaling Group according to the needs of the AWS ECS service to which it is assigned.</p>
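<p>To make the moving parts above concrete, here is a minimal AWS CLI sketch (not part of the original notes) of how the pieces could be wired together. The cluster, service, and task definition names are placeholders, and the task definition JSON file is assumed to contain the container definitions, execution role ARN, and network mode discussed above.</p><pre><code># Create the cluster that the EC2 instances register with (must match ECS_CLUSTER)
aws ecs create-cluster --cluster-name acme

# Register a task definition described in a local JSON file
aws ecs register-task-definition --cli-input-json file://task-definition.json

# Create a service that keeps two tasks of that definition running on the cluster
aws ecs create-service \
  --cluster acme \
  --service-name acme-service \
  --task-definition acme-task \
  --desired-count 2 \
  --launch-type EC2</code></pre>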
]]></content:encoded></item><item><title><![CDATA[Traverse a DOM Tree]]></title><description><![CDATA[<p>There was a need at work to apply inline styles to each DOM element in a page. This was a rare occasion to actually traverse a tree, an exercise many of us have done, I am sure, to practice for interviews or as part of homework.</p><pre><code class="language-javascript">const inlineStyles = (node) =&gt;</code></pre>]]></description><link>http://aeissa.dev/traversing-a-dom-tree/</link><guid isPermaLink="false">66227f82cfa52b163870a43f</guid><dc:creator><![CDATA[Ali Eissa]]></dc:creator><pubDate>Thu, 13 Jun 2024 12:35:51 GMT</pubDate><content:encoded><![CDATA[<p>There was a need at work to apply inline styles to each DOM element in a page. This was a rare occasion to actually traverse a tree, an exercise many of us have done, I am sure, to practice for interviews or as part of homework.</p><pre><code class="language-javascript">const inlineStyles = (node) =&gt; {
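  // Shallow-clone the element, then copy every computed style onto the clone
  // as an inline style so it keeps its appearance outside the page's stylesheets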
  const clone = node.cloneNode();
  const computedStyles = window.getComputedStyle(node);
  // The computed style object is array-like; its indexed entries are the property names
  Array.from(computedStyles).forEach((property) =&gt; {
    clone.style.setProperty(property, computedStyles.getPropertyValue(property));
  });

  return clone;
}

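// Recursively copy the tree rooted at `node`, inlining computed styles on each
// element. The parent is processed first, then its children from left to right.
// Usage sketch (assumes a browser DOM): const styledCopy = traverseDOM(document.body);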
function traverseDOM(node) {
  if (!node) {
    return;
  }

  const updatedNode = inlineStyles(node);
  // cloneNode() above is shallow, so walk the original node's children and
  // append their styled copies to the clone
  for (const child of node.children) {
    const updatedChild = traverseDOM(child);
    updatedNode.appendChild(updatedChild);
  }

  return updatedNode;
}</code></pre><p>The operation performed on each node is&#xA0;<code>inlineStyles</code>. It gets the computed styles of a node and applies them as inline styles.</p><p>This is a DOM tree, so it is not built in a specific order and each node may have many children; it is not a binary search tree. Were it a binary search tree, then our traversal order would be pre-order: we perform the operation on the root node first and then traverse the children in increasing index position, i.e. from leftmost to rightmost.</p>]]></content:encoded></item><item><title><![CDATA[Upload file to an Express Server]]></title><description><![CDATA[<p>I was working on creating a proof-of-concept RabbitMQ data pipeline in Node where a web app would upload a large csv file to an Express server and the server would stream its content into the pipeline as JSON.</p><p>There are two possibilities for uploading a file:<br>1. Send</p>]]></description><link>http://aeissa.dev/upload-file-to-express-server/</link><guid isPermaLink="false">66228052cfa52b163870a44b</guid><dc:creator><![CDATA[Ali Eissa]]></dc:creator><pubDate>Thu, 13 Jun 2024 12:35:21 GMT</pubDate><content:encoded><![CDATA[<p>I was working on creating a proof-of-concept RabbitMQ data pipeline in Node where a web app would upload a large csv file to an Express server and the server would stream its content into the pipeline as JSON.</p><p>There are two possibilities for uploading a file:<br>1. Send the entire file<br>2. Stream the file</p><h4 id="send-entire-file">Send Entire File</h4><p>Send the entire csv file from the browser.</p><pre><code class="language-js">
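  // `file` is assumed to be a File object chosen by the user, e.g. taken from a
  // file input element via inputElement.files[0] (the input element is not shown here)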

  fetch(&apos;http://localhost:3000/upload&apos;, {
    method: &apos;POST&apos;,
    headers: {
      &apos;Content-Type&apos;: &apos;text/csv&apos; 
    },
    body: file 
  })
  .then(success =&gt; console.log(success))
  .catch(error =&gt; console.log(error))

</code></pre><p>The two important points in the server are:</p><ol><li>How to handle the request</li><li>How to stream the csv file content as JSON into the pipeline</li></ol><p>To get a stream of JSON objects from the csv file, create a stream and pipe that stream into&#xA0;<code>fast-csv</code>.</p><pre><code class="language-js">
  const app = require(&apos;express&apos;)()
  const textBodyParser = require(&apos;body-parser&apos;).text
  const csv = require(&apos;fast-csv&apos;)
  const { Readable } = require(&apos;stream&apos;)
  
  // Parse text/csv request bodies; the raised limit handles very large files
  app.use(textBodyParser({ type: &apos;text/csv&apos;, limit: &apos;500mb&apos; }))
  
  app.post(&apos;/upload&apos;, (req, res) =&gt; {
    const content = Readable.from(req.body)
    content
      .pipe(csv.parse({ headers: true }))
      .on(&apos;data&apos;, (data) =&gt; {
        console.log(data) // Handle JSON object
      })
    res.sendStatus(200)
  })
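
  // Assumed addition so the snippet runs on its own: listen on the port that the
  // browser snippet above targets
  app.listen(3000)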
</code></pre><h4 id="stream-file">Stream File</h4><p>Stream the csv file from the browser.</p><pre><code class="language-js">
  // Important: the file is sent as FormData; `file` is the same File object as before
  const data = new FormData()
  data.append(&apos;file&apos;, file)
  fetch(&apos;http://localhost:3000/upload&apos;, {
    method: &apos;POST&apos;,
    body: data,
  })
  .then((success) =&gt; console.log(success)) 
  .catch((error) =&gt; console.log(error))
  </code></pre><p>For the server to handle the stream, the HTTP request must have the header&#xA0;<code>Content-Type: multipart/form-data; boundary=aBoundaryString</code>; more info can be found&#xA0;<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types#multipartform-data">here</a>.<br>By sending the file as form data, we can avoid having to specify this header. The browser will take care of it.</p><p>Use&#xA0;<code>busboy</code>&#xA0;to get the file stream and pipe that to&#xA0;<code>fast-csv</code>&#xA0;to get a stream of JSON objects.</p><p>The resulting code:</p><pre><code>
  // Assumes the same express app and fast-csv setup as the previous snippet
  const Busboy = require(&apos;busboy&apos;) // older busboy releases export a constructor, as used here

  app.post(&apos;/upload&apos;, (req, res) =&gt; {
    const busboy = new Busboy({ headers: req.headers })
    // Busboy also gives us a lot of information regarding the file
    busboy.on(&apos;file&apos;, (__, file) =&gt; {
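      // `__` is the form field name, which we do not need here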
      file.pipe(csv.parse({ headers: true })).on(&apos;data&apos;, (row) =&gt; {
        // Handle data here. Row is a csv row in JSON
        console.log(&apos;Row in JSON&apos;, row) 
      })
      file.on(&apos;end&apos;, function () {
        // Handle end case here
        console.log(&apos;file ended&apos;)
      })
    })
    busboy.on(&apos;finish&apos;, function () {
      res.writeHead(303, { Connection: &apos;close&apos;, Location: &apos;/&apos; })
      res.end()
    })
    req.pipe(busboy)
  })
  </code></pre>]]></content:encoded></item><item><title><![CDATA[Configure Git]]></title><description><![CDATA[<p>Git can be configured to improve a developer&apos;s workflow. I use git from the command line, so I rely on aliases and hooks.</p><h2 id="aliases"><strong>Aliases</strong></h2><p>Git aliases allow one to decrease the amount of typing and the need to memorize exact commands. They are defined in the <code>.gitconfig</code> file, under</p>]]></description><link>http://aeissa.dev/git-configuration/</link><guid isPermaLink="false">66227940cfa52b163870a40a</guid><dc:creator><![CDATA[Ali Eissa]]></dc:creator><pubDate>Fri, 19 Apr 2024 14:12:47 GMT</pubDate><content:encoded><![CDATA[<p>Git can be configured to improve a developer&apos;s workflow. I use git from the command line, so I rely on aliases and hooks.</p><h2 id="aliases"><strong>Aliases</strong></h2><p>Git aliases allow one to decrease the amount of typing and the need to memorize exact commands. They are defined in the <code>.gitconfig</code> file, under the <code>alias</code> section.</p><pre><code>[alias]
  c=commit
  lg=log -1</code></pre><p>In the example above, the aliases <code>c</code> and <code>lg</code> allow one to run <code>git c</code> and <code>git lg</code> instead of <code>git commit</code> or <code>git log -1</code>.</p><p>The alias section in my <code>.gitconfig</code> is as follows:</p><pre><code>[alias]
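  # Aliases can also be added from the command line, e.g. git config --global alias.c commit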
  a=add
  b=branch
  c=commit
  p=push
  s=status
  # Add a patch
  ap=&quot;!git add ${@} -p; git status&quot;
  # Simple Checkout
  co=&quot;!git checkout ${@}; git status&quot;
  # Display last commit (HEAD)
  lst=&quot;!git log -1 | cat&quot;
  # Search for a branch
  sr=&quot;!git branch -a | grep $2&quot;
  # Discard changes
  unco=&quot;!git checkout -- ${@}; git status&quot;
  # Unstage changes
  unst=&quot;!git reset HEAD -- ${@}; git status&quot;</code></pre><p>The <code>!</code> tells git to run the command as if in a terminal.</p><h2 id="hooks">Hooks</h2><p>Git hooks allow one to tap into the git lifecycle events. This gives one the ability to automate routine tasks. I use the following hooks:</p><ol>
<li>pre-commit</li>
<li>commit-msg</li>
<li>pre-push</li>
</ol>
<h3 id="pre-commit">Pre-commit</h3><p>This hook is run before any of the commit-related git lifecycle events are fired. I use it to run linting and type checking.</p><pre><code class="language-sh">#!/bin/sh
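# Lives at .git/hooks/pre-commit and must be executable (chmod +x)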
set -e

# Quote the pathspecs so git, not the shell, matches staged .ts/.tsx files anywhere in the repo
changed_files=&quot;$(git diff --staged --name-only -- &apos;*.ts&apos; &apos;*.tsx&apos;)&quot;

## Linter on changed files
echo &quot;Running ESLint on changed files&quot;
echo $changed_files | xargs npx eslint

## Typechecker on changed files
echo &quot;Running typechecker on changed files&quot;
echo $changed_files | xargs npx tsc --noEmit</code></pre><h3 id="commit-msg">Commit-msg</h3><p>This hook is run after a commit message has been written and before the commit has been applied. I use it to ensure that the commit message header is prefixed with the issue/ticket ID.</p><pre><code class="language-sh">#!/bin/bash
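# Lives at .git/hooks/commit-msg. bash, rather than POSIX sh, is needed for
# the [[ =~ ]] matching and BASH_REMATCH used below.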
set -e

# Read the first line (the header) from the commit message file, COMMIT_EDITMSG
read -r msg &lt; &quot;$1&quot;

# Get current branch name
branch=$(git symbolic-ref --short -q HEAD)

# If the branch is properly named, then extract the XXX-1234 prefix.
# It will be used to prefix the commit message.
if [[ $branch =~ ^[a-zA-Z]{3}-[0-9]+ ]]; then
  prefix=&quot;${BASH_REMATCH[0]}&quot;
else 
  echo &quot;Branch name is not prefixed with XXX-1234.&quot;
  exit 1
fi

if [[ $msg =~ ^[a-zA-Z]{3}-[0-9]+ ]]; then
  echo &quot;The commit message is already properly formatted.&quot;
else
  msg=&quot;$prefix $msg&quot;
  # Replace the commit message header with the prefixed one
  # (gsed is GNU sed, installed separately on macOS)
  gsed -i &quot;1s/.*/$msg/&quot; &quot;$1&quot;
  echo &quot;Prefixed the commit message with $prefix&quot;
fi</code></pre><h3 id="pre-push">Pre-push</h3><p>This hook is run before one pushes to a remote branch. I use it to run tests; this is especially helpful when pushing to an open PR.</p><pre><code class="language-bash">#!/bin/bash

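# Lives at .git/hooks/pre-push. The flags assume a Jest-based test runner:
# --watchAll=false disables watch mode and --silent suppresses console output.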
npm test -- --all --watchAll=false --silent</code></pre>]]></content:encoded></item></channel></rss>