<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>DevOps experience</title>
  <subtitle>Notes on infrastructure, tooling, and the small bits in between.</subtitle>
  <link href="https://artemstar.com/"/>
  <link rel="self" href="https://artemstar.com/atom.xml"/>
  <id>https://artemstar.com/</id>
  <updated>2026-05-12T06:45:28Z</updated>
<entry>
    <title>Tailscale. A VPN you forget is even there.</title>
    <link href="https://artemstar.com/2026/05/10/tailscale-a-vpn-you-forget-about/"/>
    <id>https://artemstar.com/2026/05/10/tailscale-a-vpn-you-forget-about/</id>
    <updated>2026-05-10T00:00:00Z</updated>
    <published>2026-05-10T00:00:00Z</published>
    <content type="html">&lt;p&gt;There’s a two-part series on this blog from 2017 about setting up Cisco ASA with FreeRADIUS and two-factor authentication. The respectful suggestion in 2026 is: don’t do any of that anymore.&lt;/p&gt;&lt;p&gt;Specifically, if you are a small team or an individual and you need a VPN, you almost certainly do not need to operate a VPN gateway. The whole category got reinvented around &lt;a href="https://www.wireguard.com/"&gt;WireGuard&lt;/a&gt;, and the practical face of it for most people is &lt;a href="https://tailscale.com/"&gt;Tailscale&lt;/a&gt;.&lt;/p&gt;&lt;h3 id="what-tailscale-is"&gt;What Tailscale is&lt;/h3&gt;&lt;p&gt;Tailscale is a control plane on top of WireGuard. WireGuard itself is a kernel module (on Linux) or a userspace library, and a tiny, beautiful protocol. The hard part of WireGuard, historically, has been key distribution, peer discovery, and NAT traversal. Tailscale handles those.&lt;/p&gt;&lt;p&gt;You install the client on every device you own — laptop, server, phone, NAS, whatever — log into a SSO provider, and they all show up as machines on a private network with stable IPs in the 100.64/10 range. Connections between any two of those machines go peer-to-peer over WireGuard, with the Tailscale coordination server only used for handshake.&lt;/p&gt;&lt;p&gt;The user experience is: &lt;code class="highlighter-rouge"&gt;ssh prod-bastion&lt;/code&gt; and you’re in. There’s no VPN client to start, no tunnel to bring up, no kill switch toggle. Tailscale is running in the background. That is the whole product.&lt;/p&gt;&lt;h3 id="setting-it-up-on-a-server"&gt;Setting it up on a server&lt;/h3&gt;&lt;p&gt;For comparison, the 2017 Cisco posts were about 3,000 words of config between them. Here is the 2026 equivalent for a Linux server:&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ curl -fsSL https://tailscale.com/install.sh | sh
$ sudo tailscale up --ssh&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;That is it. The &lt;code class="highlighter-rouge"&gt;--ssh&lt;/code&gt; flag opts the host into Tailscale SSH, which uses your Tailscale identity instead of static SSH keys. ACLs are configured centrally and applied across all your machines.&lt;/p&gt;&lt;p&gt;For an unattended server you want non-interactive setup, which uses an auth key:&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ sudo tailscale up \
    --authkey=tskey-auth-… \
    --hostname=prod-bastion \
    --ssh \
    --advertise-tags=tag:server&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Auth keys can be ephemeral (the machine disappears from the tailnet when it’s offline for a while), reusable, or one-shot. For ECS / Kubernetes workloads I use ephemeral, tagged keys generated from Terraform.&lt;/p&gt;&lt;h3 id="acls-are-actually-good"&gt;ACLs are actually good&lt;/h3&gt;&lt;p&gt;The ACL file is a single JSON / HuJSON document. It is short, readable, and lives in version control. A starter that covers most teams:&lt;/p&gt;&lt;div class="language-json highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;{
  "tagOwners": {
    "tag:server":  ["group:admins"],
    "tag:db":      ["group:admins"],
    "tag:dev":     ["autogroup:member"]
  },
  "groups": {
    "group:admins": ["artem@example.com"]
  },
  "acls": [
    // everyone can talk to dev boxes
    { "action": "accept", "src": ["autogroup:member"], "dst": ["tag:dev:*"] },
    // only admins can talk to db boxes
    { "action": "accept", "src": ["group:admins"],     "dst": ["tag:db:*"]  },
    // server-to-server within the tailnet
    { "action": "accept", "src": ["tag:server"],       "dst": ["tag:server:*"] }
  ],
  "ssh": [
    { "action": "check", "src": ["autogroup:member"], "dst": ["autogroup:self"], "users": ["autogroup:nonroot"] }
  ]
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The &lt;code class="highlighter-rouge"&gt;"check"&lt;/code&gt; action on the SSH rule means the connection still requires you to re-authenticate via the IdP — basically a soft 2FA prompt — before SSH lets you in. The first time you see this in production it feels like cheating.&lt;/p&gt;&lt;h3 id="what-about-private-networks"&gt;What about my private network?&lt;/h3&gt;&lt;p&gt;If you have a VPC or an on-prem subnet that you want reachable from the tailnet, you put a Tailscale client on one box in that subnet and tell it to advertise the routes. The Cisco ASA equivalent of this used to be a series of routing tables and IPSec phase-2 selectors. Now it is one flag:&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ sudo tailscale up \
    --advertise-routes=10.0.0.0/16 \
    --ssh&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Then in the admin console you approve the advertised route once. Done.&lt;/p&gt;&lt;h3 id="the-honest-trade-offs"&gt;The honest trade-offs&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;The coordination plane is hosted by Tailscale Inc. (You can self-host it with &lt;a href="https://github.com/juanfont/headscale"&gt;Headscale&lt;/a&gt;, which is fine, but you give up the polish.)&lt;/li&gt;&lt;li&gt;For a small team Tailscale is free. Above 3 users you’re on a paid plan. For a personal project this never matters.&lt;/li&gt;&lt;li&gt;Throughput is limited by WireGuard itself, which is fast in absolute terms but obviously less fast than no tunnel at all. For everything I do — SSH, web admin, occasional file copy — it does not matter.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;That is the recommendation. There is no good reason to run a corporate VPN gateway by hand in 2026 unless something specific forces your hand. The 2017 Cisco posts are preserved here for historical interest — and as a reminder that the right answer to a problem changes if you wait long enough.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>GitHub Actions for AWS deployments. A small, sane setup.</title>
    <link href="https://artemstar.com/2026/04/08/github-actions-for-aws-deploys/"/>
    <id>https://artemstar.com/2026/04/08/github-actions-for-aws-deploys/</id>
    <updated>2026-04-08T00:00:00Z</updated>
    <published>2026-04-08T00:00:00Z</published>
    <content type="html">&lt;p&gt;This is the GitHub Actions starter I want everyone on my team to use for AWS deploys. No long-lived access keys, no plaintext secrets, no magic. The whole thing is about 60 lines of YAML.&lt;/p&gt;&lt;p&gt;This is not a rerun of the older &lt;a href="/2017/08/12/aws-lambda-github-bot/"&gt;Lambda-and-GitHub post&lt;/a&gt; on this blog. That one was about reacting to GitHub webhooks from AWS. This is the inverse: deploying to AWS from GitHub. Different direction, different problem.&lt;/p&gt;&lt;h3 id="the-old-way"&gt;The old way&lt;/h3&gt;&lt;p&gt;For years the standard pattern was:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Create an IAM user.&lt;/li&gt;&lt;li&gt;Generate an access key for it.&lt;/li&gt;&lt;li&gt;Paste the key into a GitHub repository secret.&lt;/li&gt;&lt;li&gt;Hope the key never leaks. Rotate when you remember.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;It worked. It was also the source of basically every “our production access keys ended up in a public repo” incident.&lt;/p&gt;&lt;h3 id="the-new-way-oidc"&gt;The new way: OIDC&lt;/h3&gt;&lt;p&gt;GitHub Actions can be an OIDC identity provider. Same idea as IRSA on EKS — your workflow gets a short-lived JWT signed by GitHub, AWS trusts that JWT under specific conditions, the workflow assumes a role with no long-lived secret ever existing.&lt;/p&gt;&lt;p&gt;The pieces:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Register &lt;code class="highlighter-rouge"&gt;token.actions.githubusercontent.com&lt;/code&gt; as an OIDC provider in IAM.&lt;/li&gt;&lt;li&gt;Create an IAM role whose trust policy allows assumption from that provider, scoped to your repo and (ideally) a specific branch or environment.&lt;/li&gt;&lt;li&gt;In the workflow, call &lt;code class="highlighter-rouge"&gt;aws-actions/configure-aws-credentials@v4&lt;/code&gt; with &lt;code class="highlighter-rouge"&gt;role-to-assume&lt;/code&gt;. No secrets.&lt;/li&gt;&lt;/ol&gt;&lt;h3 id="the-trust-policy"&gt;The trust policy&lt;/h3&gt;&lt;p&gt;The least-permissive version pins the role to a single repo and a single ref:&lt;/p&gt;&lt;div class="language-json highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::1111…:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
      },
      "StringLike": {
        "token.actions.githubusercontent.com:sub": "repo:yourorg/my-service:ref:refs/heads/main"
      }
    }
  }]
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;For pull-request previews you would broaden the &lt;code class="highlighter-rouge"&gt;sub&lt;/code&gt; pattern, for example:&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;"token.actions.githubusercontent.com:sub": "repo:yourorg/my-service:pull_request"&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;And for a tag-driven release flow:&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;"token.actions.githubusercontent.com:sub": "repo:yourorg/my-service:ref:refs/tags/v*"&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Whatever you do, don’t use &lt;code class="highlighter-rouge"&gt;repo:org/*&lt;/code&gt;. That trusts every workflow in every repo of the org. People do this. People shouldn’t.&lt;/p&gt;&lt;h3 id="the-workflow"&gt;The workflow&lt;/h3&gt;&lt;p&gt;This is the whole thing. &lt;code class="highlighter-rouge"&gt;.github/workflows/deploy.yml&lt;/code&gt;:&lt;/p&gt;&lt;div class="language-yaml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;name: deploy

on:
  push:
    branches: [main]

permissions:
  id-token: write       # required for OIDC
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Assume deploy role
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::1111…:role/github-deploy-my-service
          aws-region: eu-west-1
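          # optional: a session name that shows up in CloudTrail for auditing
          role-session-name: gha-${{ github.run_id }}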

      - name: Build &amp; push image
        run: |
          ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
          REPO=$ACCOUNT.dkr.ecr.eu-west-1.amazonaws.com/my-service
          aws ecr get-login-password --region eu-west-1 \
            | docker login --username AWS --password-stdin $REPO
          docker build -t $REPO:${{ github.sha }} .
          docker push $REPO:${{ github.sha }}

      - name: Deploy
        run: |
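          # --force-new-deployment restarts tasks with the existing task definition;
          # it only picks up a new image if the task definition references a mutable tag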
          aws ecs update-service \
            --cluster prod \
            --service my-service \
            --force-new-deployment&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Two things worth highlighting:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The &lt;code class="highlighter-rouge"&gt;permissions:&lt;/code&gt; block at the top is mandatory. Without &lt;code class="highlighter-rouge"&gt;id-token: write&lt;/code&gt; the OIDC token isn’t minted, and you’ll get a confusing 403 from STS.&lt;/li&gt;&lt;li&gt;&lt;code class="highlighter-rouge"&gt;environment: production&lt;/code&gt; hooks this job into GitHub’s environments feature, which lets you require approvals, restrict deploys to specific branches, and have per-environment OIDC subjects. Use it.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id="the-bit-i-keep-forgetting"&gt;The bit I keep forgetting&lt;/h3&gt;&lt;p&gt;STS has to fetch GitHub’s OIDC signing keys to validate the token, and under load that occasionally fails transiently. &lt;code class="highlighter-rouge"&gt;configure-aws-credentials&lt;/code&gt; handles the retry for you, but if you write the assume-role call by hand using the AWS CLI you’ll occasionally see &lt;code class="highlighter-rouge"&gt;InvalidIdentityToken&lt;/code&gt; on the first try. The action does more for you than it appears to.&lt;/p&gt;&lt;p&gt;That’s it. Sixty lines, zero long-lived secrets, scoped to one repo and one branch. If your AWS deploy workflow still uses access keys, this is the migration to do this quarter.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>OpenTofu vs Terraform. Should I switch?</title>
    <link href="https://artemstar.com/2026/03/15/opentofu-vs-terraform/"/>
    <id>https://artemstar.com/2026/03/15/opentofu-vs-terraform/</id>
    <updated>2026-03-15T00:00:00Z</updated>
    <published>2026-03-15T00:00:00Z</published>
    <content type="html">&lt;p&gt;Terraform isn’t MPL anymore. OpenTofu is the community fork. If you’re responsible for a Terraform codebase you probably want a considered answer to “should I switch.” Here is mine.&lt;/p&gt;&lt;h3 id="what-actually-happened"&gt;What actually happened&lt;/h3&gt;&lt;p&gt;Quick recap, with as little drama as possible:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;August 2023: HashiCorp re-licensed Terraform (and all their other products) from MPL 2.0 to the Business Source License. The TL;DR: still free for most uses, not free if you compete with HashiCorp’s commercial products.&lt;/li&gt;&lt;li&gt;September 2023: A coalition of vendors and individuals announced a fork. It was renamed to OpenTofu and adopted by the Linux Foundation in early 2024.&lt;/li&gt;&lt;li&gt;2024–2025: OpenTofu released 1.6, 1.7, 1.8, 1.9. It tracks Terraform’s features pretty closely but has shipped some of its own — early-evaluation variables, encrypted state files, the &lt;code class="highlighter-rouge"&gt;for_each&lt;/code&gt; in provider blocks.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;That is the entire substance of the “drama.” The boring practical question is whether the syntax you already write still works.&lt;/p&gt;&lt;h3 id="the-practical-answer"&gt;The practical answer&lt;/h3&gt;&lt;p&gt;For almost everyone the answer is: yes, both still work, OpenTofu is a drop-in replacement, you can switch the binary and your code is fine. I have done the switch on two reasonable-sized codebases. Both times it went like this:&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ brew install opentofu       # or whatever your package manager is
$ tofu init
$ tofu plan
$ tofu apply&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;That is the migration. The state file format is compatible. &lt;code class="highlighter-rouge"&gt;terraform_remote_state&lt;/code&gt; still works. Modules from the registry still install. The same providers from &lt;code class="highlighter-rouge"&gt;registry.terraform.io&lt;/code&gt; can be used, and OpenTofu also has its own registry at &lt;code class="highlighter-rouge"&gt;registry.opentofu.org&lt;/code&gt; mirroring most of them.&lt;/p&gt;&lt;p&gt;The minor wrinkle is the binary name. &lt;code class="highlighter-rouge"&gt;tofu&lt;/code&gt; instead of &lt;code class="highlighter-rouge"&gt;terraform&lt;/code&gt;. If you have CI scripts or developer workflows hardcoded to &lt;code class="highlighter-rouge"&gt;terraform plan&lt;/code&gt;, that’s a search-and-replace. Some teams alias &lt;code class="highlighter-rouge"&gt;terraform&lt;/code&gt; to &lt;code class="highlighter-rouge"&gt;tofu&lt;/code&gt; on shared runners during the transition. That works, but I would rather rip the band-aid off.&lt;/p&gt;&lt;h3 id="should-you-switch"&gt;Should you switch?&lt;/h3&gt;&lt;p&gt;The reasons to switch:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;License clarity.&lt;/strong&gt; If your company has a clear policy about not using BSL software in production, the decision is made for you.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;You want the new features.&lt;/strong&gt; Encrypted state out of the box is the one I have actually used.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;You don’t want to depend on a single vendor’s commercial roadmap for what is, by now, very foundational software.&lt;/strong&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The reasons not to switch:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;You use HCP Terraform / Terraform Cloud / Sentinel policies.&lt;/strong&gt; That ecosystem is HashiCorp-only. OpenTofu does not currently aim to clone it.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;You use a specific recent feature in stable Terraform that OpenTofu has not yet shipped.&lt;/strong&gt; The gap is shrinking but they are not exactly the same software.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Your team has zero appetite for tool-name changes&lt;/strong&gt; — which, fair enough, is sometimes the only reason that matters.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id="running-both"&gt;Running both&lt;/h3&gt;&lt;p&gt;You don’t actually have to choose. The two binaries can operate on the same code, sometimes even on the same state, as long as you avoid features that exist in only one of them. I have a small CI matrix on one project that runs &lt;code class="highlighter-rouge"&gt;terraform fmt -check&lt;/code&gt; and &lt;code class="highlighter-rouge"&gt;tofu fmt -check&lt;/code&gt;, and &lt;code class="highlighter-rouge"&gt;terraform validate&lt;/code&gt; and &lt;code class="highlighter-rouge"&gt;tofu validate&lt;/code&gt;, as a smoke test. It catches accidental drift.&lt;/p&gt;&lt;div class="language-yaml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;jobs:
  lint:
    strategy:
      matrix:
        tool: [terraform, tofu]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        if: matrix.tool == 'terraform'
      - uses: opentofu/setup-opentofu@v1
        if: matrix.tool == 'tofu'
      - run: ${{ matrix.tool }} fmt -check -recursive
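      # validate needs an initialized working directory; -backend=false skips remote state
      - run: ${{ matrix.tool }} init -backend=false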
      - run: ${{ matrix.tool }} validate&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;h3 id="what-i-actually-do"&gt;What I actually do&lt;/h3&gt;&lt;p&gt;I run OpenTofu locally and in CI for everything new. I have not migrated a couple of older systems that are tied to a Terraform Cloud workspace. I don’t feel any urgency about it. The point of infrastructure-as-code was always that the code was the durable artifact, not the binary that interprets it.&lt;/p&gt;&lt;p&gt;The thing I would not do is wait for the dust to settle. The dust has settled. OpenTofu is here, it is fine, the licensing is clean. If you have a quiet afternoon, switch and stop thinking about it.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>IAM roles for service accounts on EKS. A small primer.</title>
    <link href="https://artemstar.com/2026/02/20/iam-roles-for-service-accounts-eks/"/>
    <id>https://artemstar.com/2026/02/20/iam-roles-for-service-accounts-eks/</id>
    <updated>2026-02-20T00:00:00Z</updated>
    <published>2026-02-20T00:00:00Z</published>
    <content type="html">&lt;p&gt;One of the small things in EKS that quietly fixes a very old problem: how do my pods get AWS credentials without me stuffing access keys into Kubernetes secrets?&lt;/p&gt;&lt;p&gt;If you have only ever used IAM roles on EC2 instances, the story so far is: the instance has a role, the instance metadata service hands out temporary credentials, every AWS SDK looks them up automatically. Done. It works because there is exactly one piece of code per instance.&lt;/p&gt;&lt;p&gt;On a Kubernetes node, you have many pods. They have different jobs. The S3-backup pod shouldn’t be able to read the database. The pod that talks to Stripe doesn’t need any AWS access at all. If you hang IAM on the node, every pod on that node gets every permission.&lt;/p&gt;&lt;p&gt;IAM Roles for Service Accounts (IRSA) is the EKS feature that fixes this. A few moving parts; once you see them once they are obvious.&lt;/p&gt;&lt;h3 id="the-pieces"&gt;The pieces&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;The cluster has an OIDC provider — when you create the cluster, EKS gives it a public JWKS endpoint.&lt;/li&gt;&lt;li&gt;You register that OIDC provider as an identity provider in IAM.&lt;/li&gt;&lt;li&gt;You create an IAM role with a trust policy that says “I trust tokens from this OIDC provider, but only if they claim to be service account X in namespace Y.”&lt;/li&gt;&lt;li&gt;You annotate the Kubernetes service account with the role ARN.&lt;/li&gt;&lt;li&gt;EKS’ admission webhook mounts a projected token into pods that use that service account, and sets the right environment variables so the AWS SDK picks it up.&lt;/li&gt;&lt;/ol&gt;&lt;h3 id="the-trust-policy"&gt;The trust policy&lt;/h3&gt;&lt;p&gt;The interesting one is step 3. The trust policy on the role looks like this:&lt;/p&gt;&lt;div class="language-json highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::1111…:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLED5…"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLED5…:sub": "system:serviceaccount:payments:s3-backup",
        "oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLED5…:aud": "sts.amazonaws.com"
      }
    }
  }]
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The &lt;code class="highlighter-rouge"&gt;sub&lt;/code&gt; condition is the important one — it pins the role to one specific service account in one specific namespace. Without it, anything in the cluster could assume the role.&lt;/p&gt;&lt;h3 id="the-service-account"&gt;The service account&lt;/h3&gt;&lt;div class="language-yaml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-backup
  namespace: payments
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::1111…:role/eks-payments-s3-backup&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;That annotation is all the Kubernetes side needs. The webhook handles the rest at pod-admission time:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Projects a service-account token with audience &lt;code class="highlighter-rouge"&gt;sts.amazonaws.com&lt;/code&gt; into &lt;code class="highlighter-rouge"&gt;/var/run/secrets/eks.amazonaws.com/serviceaccount/token&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;Sets &lt;code class="highlighter-rouge"&gt;AWS_ROLE_ARN&lt;/code&gt; and &lt;code class="highlighter-rouge"&gt;AWS_WEB_IDENTITY_TOKEN_FILE&lt;/code&gt; in the pod environment.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The AWS SDK’s default credential chain knows to read that token and call &lt;code class="highlighter-rouge"&gt;sts:AssumeRoleWithWebIdentity&lt;/code&gt;, which gives the pod temporary credentials that rotate themselves.&lt;/p&gt;&lt;h3 id="a-terraform-snippet"&gt;A Terraform snippet&lt;/h3&gt;&lt;p&gt;In practice I would never click any of this in the console. Here is a compact Terraform module sketch that wires the whole thing up. The &lt;code class="highlighter-rouge"&gt;aws_iam_openid_connect_provider&lt;/code&gt; resource is created once per cluster; the role can be created per service account.&lt;/p&gt;&lt;div class="language-hcl highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;data "aws_eks_cluster" "main" { name = var.cluster_name }

data "tls_certificate" "oidc" {
  url = data.aws_eks_cluster.main.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  url             = data.aws_eks_cluster.main.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
}

data "aws_iam_policy_document" "assume_role" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
    }
    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:${var.namespace}:${var.service_account}"]
    }
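    # also pin the token audience, mirroring the aud condition in the JSON policy above
    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:aud"
      values   = ["sts.amazonaws.com"]
    }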
  }
}

resource "aws_iam_role" "sa" {
  name               = "eks-${var.namespace}-${var.service_account}"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Attach whatever managed or inline policies the workload actually needs to that role. The service account in Kubernetes is annotated with &lt;code class="highlighter-rouge"&gt;aws_iam_role.sa.arn&lt;/code&gt; and the rest is plumbing.&lt;/p&gt;&lt;h3 id="the-thing-people-forget"&gt;The thing people forget&lt;/h3&gt;&lt;p&gt;The AWS SDK on the pod also has to be recent enough to support the web-identity credential provider. Anything modern is fine. If you are using a very old SDK or an old version of the CLI that pre-dates IRSA, the SDK will fall back to the node IAM role, you will get inconsistent behavior, and you will lose half a day to it. I have lost the half day, you don’t have to.&lt;/p&gt;&lt;p&gt;Other than that — this is the single piece of EKS that I think every team should set up on day one and never look at again.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Kubernetes in 2026. What changed since the last post here.</title>
    <link href="https://artemstar.com/2026/01/12/kubernetes-2026-what-changed/"/>
    <id>https://artemstar.com/2026/01/12/kubernetes-2026-what-changed/</id>
    <updated>2026-01-12T00:00:00Z</updated>
    <published>2026-01-12T00:00:00Z</published>
    <content type="html">&lt;p&gt;The last serious Kubernetes post on this blog was in January 2018 — a CI/CD piece with GitLab and Helm. A lot has changed since. This is not a comprehensive changelog, just the short list of things that changed how the job actually feels.&lt;/p&gt;&lt;h3 id="the-control-plane-is-not-your-problem"&gt;The control plane is not your problem&lt;/h3&gt;&lt;p&gt;In 2018 you set up Kubernetes by running &lt;code class="highlighter-rouge"&gt;kubeadm&lt;/code&gt; on three VMs, hoping the etcd backup script worked, and then writing a runbook for the day a master went down. Today, on every cloud I touch, the control plane is a managed service. EKS, GKE Autopilot, AKS, DOKS, Linode LKE — you ask for a cluster, fifteen minutes later you have a cluster. The vendor handles the masters and etcd.&lt;/p&gt;&lt;p&gt;If you are still running the control plane yourself, you almost certainly have a reason — air-gapped environment, sovereignty, your own metal. If you don’t have a reason, stop doing it.&lt;/p&gt;&lt;h3 id="helm-is-not-the-only-answer"&gt;Helm is not the only answer&lt;/h3&gt;&lt;p&gt;I used Helm in the 2018 post and Helm is still around (now on v3, no more Tiller, thank goodness). It is fine. But the alternatives are real now:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Kustomize.&lt;/strong&gt; Ships with &lt;code class="highlighter-rouge"&gt;kubectl&lt;/code&gt; since 1.14. Overlays instead of templates. If you can read YAML, you can read a Kustomize overlay.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Helmfile / Argo CD applications-of-applications / Flux Kustomizations.&lt;/strong&gt; Whatever shape you want for managing the chart-of-charts problem.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;cdk8s.&lt;/strong&gt; Define your manifests in TypeScript or Python. Useful if you have repeated patterns that templating handles badly.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;For a simple service I default to Kustomize. For something with many parameters and consumers, Helm earns its keep.&lt;/p&gt;&lt;h3 id="gitops-is-just-how-this-works-now"&gt;GitOps is just how this works now&lt;/h3&gt;&lt;p&gt;In 2018 my CI pipeline ran &lt;code class="highlighter-rouge"&gt;kubectl apply&lt;/code&gt; from a runner. This worked until I had three clusters and four people, at which point it started to make me nervous.&lt;/p&gt;&lt;p&gt;The standard pattern now is a tool — Argo CD or Flux — that lives inside the cluster and watches a Git repository. You push a commit, the controller reconciles the cluster to match. The cluster is the only thing with credentials to itself; your CI doesn’t need them.&lt;/p&gt;&lt;p&gt;Side effects of doing this that I didn’t expect in 2018:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The cluster state is auditable from &lt;code class="highlighter-rouge"&gt;git log&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;Rollback is &lt;code class="highlighter-rouge"&gt;git revert&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;The disaster recovery story is “bring up a cluster, point the GitOps controller at the repo, wait.”&lt;/li&gt;&lt;/ul&gt;&lt;h3 id="node-autoscaling-without-cluster-autoscaler"&gt;Node autoscaling without Cluster Autoscaler&lt;/h3&gt;&lt;p&gt;On AWS, &lt;a href="https://karpenter.sh/"&gt;Karpenter&lt;/a&gt; replaced Cluster Autoscaler in most of my clusters. The model is different: instead of scaling node groups up and down, Karpenter looks at unscheduled pods and provisions exactly the right instance type. 
If your workload needs 4 vCPUs and 8&amp;nbsp;GB, you’ll get a &lt;code class="highlighter-rouge"&gt;c7i.xlarge&lt;/code&gt; (or a Spot equivalent), not a node from a pre-baked pool.&lt;/p&gt;&lt;p&gt;The result is fewer empty nodes, faster scale-up, and a much smaller config file. Other clouds have started shipping similar things; on GKE this is roughly what Autopilot does for you.&lt;/p&gt;&lt;h3 id="ingress-is-now-gateway-api"&gt;Ingress is now Gateway API&lt;/h3&gt;&lt;p&gt;The &lt;code class="highlighter-rouge"&gt;Ingress&lt;/code&gt; resource is still here and still works, but if you are starting fresh I would use the &lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Gateway API&lt;/a&gt;. It separates the roles cleanly: cluster operators manage &lt;code class="highlighter-rouge"&gt;Gateway&lt;/code&gt; resources (think “there is a load balancer here, with this TLS, on this port”), application teams manage &lt;code class="highlighter-rouge"&gt;HTTPRoute&lt;/code&gt; resources (think “my &lt;code class="highlighter-rouge"&gt;/api&lt;/code&gt; path goes to my Service”). This was always how Ingress wanted to be, but you had to express it with annotations.&lt;/p&gt;&lt;h3 id="observability-otel-everywhere"&gt;Observability: OTel everywhere&lt;/h3&gt;&lt;p&gt;In 2018 you picked one stack — Prometheus, Datadog, New Relic — and instrumented your code for it. In 2026 you instrument for &lt;a href="https://opentelemetry.io/"&gt;OpenTelemetry&lt;/a&gt; and configure where the data goes. The vendors all accept OTLP. You can keep your instrumentation when you move from one to the other.&lt;/p&gt;&lt;p&gt;The OpenTelemetry Collector is the routing layer. It runs as a DaemonSet, scrapes Prometheus targets, ingests OTLP from your apps, batches, transforms, exports. It is the closest thing to a default I would name today.&lt;/p&gt;&lt;h3 id="a-smaller-things"&gt;A few smaller things&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;code class="highlighter-rouge"&gt;kubectl&lt;/code&gt; has built-in JSONPath and &lt;code class="highlighter-rouge"&gt;--watch&lt;/code&gt; and many other things. &lt;code class="highlighter-rouge"&gt;kubectl get pods -w -o wide&lt;/code&gt; covers a lot of needs that used to require third-party tooling.&lt;/li&gt;&lt;li&gt;The &lt;a href="https://github.com/derailed/k9s"&gt;k9s&lt;/a&gt; TUI is genuinely good. If you live in a terminal, install it.&lt;/li&gt;&lt;li&gt;Secrets in plain YAML are still a bad idea, but &lt;code class="highlighter-rouge"&gt;ExternalSecrets&lt;/code&gt; + a real secret manager (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager) is now an unremarkable setup.&lt;/li&gt;&lt;li&gt;The default container runtime is &lt;code class="highlighter-rouge"&gt;containerd&lt;/code&gt;, not &lt;code class="highlighter-rouge"&gt;docker&lt;/code&gt;. You almost never notice, except when you do.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;That is the short list. If your mental model is the 2018 model, you can still navigate; you just look like you’re carrying a flip phone.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Docker multi-stage builds and BuildKit. Shaving off the megabytes.</title>
    <link href="https://artemstar.com/2025/11/28/docker-multistage-buildkit/"/>
    <id>https://artemstar.com/2025/11/28/docker-multistage-buildkit/</id>
    <updated>2025-11-28T00:00:00Z</updated>
    <published>2025-11-28T00:00:00Z</published>
    <content type="html">&lt;p&gt;There’s an earlier post here on Docker OS images from 2017. Most of it still holds. The interesting development since then is that two features — multi-stage builds and BuildKit — make small images much easier than they used to be.&lt;/p&gt;&lt;p&gt;This is a quick walk through both, because Dockerfiles in production still routinely skip them.&lt;/p&gt;&lt;h3 id="the-naive-version"&gt;The naïve version&lt;/h3&gt;&lt;p&gt;Take a Go web server. A first-pass Dockerfile usually looks like this:&lt;/p&gt;&lt;div class="language-docker highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;FROM golang:1.23
WORKDIR /src
COPY . .
RUN go build -o app ./cmd/server
EXPOSE 8080
CMD ["./app"]&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;It works. &lt;code class="highlighter-rouge"&gt;docker build .&lt;/code&gt; produces an image. The image is about 950&amp;nbsp;MB. The actual binary is 14&amp;nbsp;MB. We are shipping the Go compiler, the standard library source, and a Debian userland to production, every time, because we needed them once during the build.&lt;/p&gt;&lt;h3 id="multi-stage-builds"&gt;Multi-stage builds&lt;/h3&gt;&lt;p&gt;Multi-stage builds let you define more than one &lt;code class="highlighter-rouge"&gt;FROM&lt;/code&gt; in a Dockerfile. Only the final stage becomes the image. The earlier stages are scratch pads. You can copy artifacts out of them.&lt;/p&gt;&lt;div class="language-docker highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;# Stage 1: build
FROM golang:1.23 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/server

# Stage 2: runtime
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
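# run as the non-root user distroless ships (uid 65532) instead of root
USER nonroot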
EXPOSE 8080
ENTRYPOINT ["/app"]&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The runtime stage is &lt;a href="https://github.com/GoogleContainerTools/distroless"&gt;distroless&lt;/a&gt; — no shell, no package manager, just glibc-less static binaries can live there. The final image is around 17&amp;nbsp;MB. Same binary, same behavior, 55× smaller.&lt;/p&gt;&lt;p&gt;A handful of details that matter:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;code class="highlighter-rouge"&gt;CGO_ENABLED=0&lt;/code&gt; — without this Go will dynamically link against the build image’s glibc and your binary won’t run in &lt;code class="highlighter-rouge"&gt;distroless/static&lt;/code&gt;. This catches people.&lt;/li&gt;&lt;li&gt;Copy &lt;code class="highlighter-rouge"&gt;go.mod&lt;/code&gt; and &lt;code class="highlighter-rouge"&gt;go.sum&lt;/code&gt; before the rest of the source, then run &lt;code class="highlighter-rouge"&gt;go mod download&lt;/code&gt;. This caches dependencies separately from your code so you don’t re-download them every time you change a Go file.&lt;/li&gt;&lt;li&gt;Pin the base image. &lt;code class="highlighter-rouge"&gt;golang:1.23&lt;/code&gt; is fine for a personal blog example. For production, pin to a digest.&lt;/li&gt;&lt;/ul&gt;&lt;h3 id="buildkit"&gt;BuildKit&lt;/h3&gt;&lt;p&gt;The classic Docker builder built each stage sequentially, top to bottom. BuildKit is the modern builder that ships with Docker by default since 23.0. It does a few things the old one didn’t:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Parallel stages — independent stages build at the same time.&lt;/li&gt;&lt;li&gt;Mount caches — you can mount a directory across builds for things like &lt;code class="highlighter-rouge"&gt;/root/.cache/go-build&lt;/code&gt; or &lt;code class="highlighter-rouge"&gt;/var/cache/apt&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;Mount secrets — pass a secret into a build without it ending up in a layer.&lt;/li&gt;&lt;li&gt;SSH forwarding for private dependencies, without baking your SSH key into a layer.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The mount cache is the one I would highlight if you have one minute. Here is how it looks for a Node build:&lt;/p&gt;&lt;div class="language-docker highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;# syntax=docker/dockerfile:1.6
FROM node:20 AS build
WORKDIR /src
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;That &lt;code class="highlighter-rouge"&gt;--mount=type=cache&lt;/code&gt; gives the &lt;code class="highlighter-rouge"&gt;RUN&lt;/code&gt; step a persistent cache directory that survives across builds but does not end up in any image layer. On a CI runner this turns a 90-second &lt;code class="highlighter-rouge"&gt;npm ci&lt;/code&gt; into a 5-second one for unchanged dependencies.&lt;/p&gt;&lt;p&gt;The &lt;code class="highlighter-rouge"&gt;# syntax=&lt;/code&gt; line at the top is not an ordinary comment — it tells BuildKit which Dockerfile frontend to use. Without it you don’t get the &lt;code class="highlighter-rouge"&gt;--mount&lt;/code&gt; flag and you’ll wonder why your Dockerfile errors out with a syntax error.&lt;/p&gt;&lt;h3 id="a-quick-checklist"&gt;A quick checklist&lt;/h3&gt;&lt;p&gt;Before you ship a Dockerfile, walk down this list:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Is the runtime image &lt;code class="highlighter-rouge"&gt;distroless&lt;/code&gt;, &lt;code class="highlighter-rouge"&gt;alpine&lt;/code&gt;, or &lt;code class="highlighter-rouge"&gt;scratch&lt;/code&gt;? If it has &lt;code class="highlighter-rouge"&gt;apt-get&lt;/code&gt; in it, why?&lt;/li&gt;&lt;li&gt;Is the dependency install step before the source copy, so it caches independently of your code?&lt;/li&gt;&lt;li&gt;Do you have a &lt;code class="highlighter-rouge"&gt;.dockerignore&lt;/code&gt;? Without one, &lt;code class="highlighter-rouge"&gt;node_modules&lt;/code&gt; and &lt;code class="highlighter-rouge"&gt;.git&lt;/code&gt; end up in the build context, and &lt;code class="highlighter-rouge"&gt;COPY . .&lt;/code&gt; happily copies them into the image.&lt;/li&gt;&lt;li&gt;Does the final image run as a non-root user? If not, add &lt;code class="highlighter-rouge"&gt;USER 1000&lt;/code&gt;.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;None of this is new. It has all been in the Docker docs since around 2019. It is, however, still missing from most of the Dockerfiles I look at.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>A few git commands I wish I knew sooner.</title>
    <link href="https://artemstar.com/2025/10/14/git-bisect-worktree-sparse-checkout/"/>
    <id>https://artemstar.com/2025/10/14/git-bisect-worktree-sparse-checkout/</id>
    <updated>2025-10-14T00:00:00Z</updated>
    <published>2025-10-14T00:00:00Z</published>
    <content type="html">&lt;p&gt;A loose continuation of the Master Git series on this blog — about eight years late, but the topic still comes up.&lt;/p&gt;&lt;p&gt;Three subcommands worth using more often than most people do. None of them are new. All of them are easy to miss in the man pages.&lt;/p&gt;&lt;h3 id="git-bisect"&gt;&lt;code class="highlighter-rouge"&gt;git bisect&lt;/code&gt; — when did this break?&lt;/h3&gt;&lt;p&gt;Imagine the situation. Your test suite was green last month. It is red today. Somewhere between 200 commits ago and now, somebody (very possibly you) introduced the bug. You don’t want to read 200 diffs.&lt;/p&gt;&lt;p&gt;&lt;code class="highlighter-rouge"&gt;git bisect&lt;/code&gt; does a binary search through history for you. You tell it one bad commit and one good commit, and it checks out the midpoint. You run your test. You tell git whether the midpoint was good or bad. It narrows the range. Repeat until you have a single guilty commit.&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git bisect start
$ git bisect bad           # current HEAD is broken
$ git bisect good v1.4.0   # last release I know was fine
Bisecting: 92 revisions left to test after this (roughly 7 steps)
[a1b2c3d…] some commit message
$ npm test                 # or whatever your check is
$ git bisect bad           # or good
…
$ git bisect reset&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;For 200 commits you need about 7-8 iterations. If your check is scriptable (and most are), you can let git run it for you with &lt;code class="highlighter-rouge"&gt;git bisect run &amp;lt;script&amp;gt;&lt;/code&gt;:&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git bisect start HEAD v1.4.0
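# start accepts the bad and good refs directly: HEAD is bad, v1.4.0 is good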
$ git bisect run ./scripts/repro.sh
…
a1b2c3d is the first bad commit&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;That is the entire workflow. You start it, you walk away, you come back to a single commit hash. I have used this to track down regressions that I had been staring at for an hour and would have stared at for the rest of the day.&lt;/p&gt;&lt;h3 id="git-worktree"&gt;&lt;code class="highlighter-rouge"&gt;git worktree&lt;/code&gt; — checking out two branches at once&lt;/h3&gt;&lt;p&gt;The standard story: you’re working on a feature branch, your colleague pings you with “quick, can you check what production looks like in &lt;code class="highlighter-rouge"&gt;main&lt;/code&gt;?” You stash. You switch. You forget about the stash for three days.&lt;/p&gt;&lt;p&gt;&lt;code class="highlighter-rouge"&gt;git worktree&lt;/code&gt; lets a single repository have multiple branches checked out into multiple directories at the same time:&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git worktree add ../site-main main
Preparing worktree (checking out 'main')
HEAD is now at 8f3a2c… some commit&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;Now you have two directories. Your original one is still on your feature branch with all its untouched changes. &lt;code class="highlighter-rouge"&gt;../site-main&lt;/code&gt; has &lt;code class="highlighter-rouge"&gt;main&lt;/code&gt; checked out. Same Git history, same remotes, no stash, no tears. When you’re done:&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git worktree remove ../site-main&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;I use this for three things mostly:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Reviewing a colleague’s pull request without disturbing my own work.&lt;/li&gt;&lt;li&gt;Running a long-running test on the old branch while I work on the new one.&lt;/li&gt;&lt;li&gt;Comparing builds — I have two different commits compiled at the same time and can diff their outputs.&lt;/li&gt;&lt;/ol&gt;&lt;h3 id="git-sparse-checkout"&gt;&lt;code class="highlighter-rouge"&gt;git sparse-checkout&lt;/code&gt; — only the bits you need&lt;/h3&gt;&lt;p&gt;This one is more situational, but when you need it, you really need it. If you work in a monorepo or any large repository, you can ask Git to only materialize a subset of paths in your working directory. The history is still complete, the objects still get fetched, but your filesystem only has what you asked for.&lt;/p&gt;&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git clone --filter=blob:none --no-checkout git@github.com:org/mono.git
$ cd mono
$ git sparse-checkout init --cone
$ git sparse-checkout set services/payments libs/common
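$ git sparse-checkout list   # confirm the cone patterns took effect
libs/common
services/payments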
$ git checkout main&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The &lt;code class="highlighter-rouge"&gt;--filter=blob:none&lt;/code&gt; bit makes the clone a partial clone (blobs are fetched on demand). &lt;code class="highlighter-rouge"&gt;--cone&lt;/code&gt; mode restricts the patterns to directory prefixes, which is faster and almost always what you want. The result: a repo that pretends, for the purposes of your editor and shell, to only contain &lt;code class="highlighter-rouge"&gt;services/payments&lt;/code&gt; and &lt;code class="highlighter-rouge"&gt;libs/common&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;I will not pretend this is for everyone. If you work on a thirty-thousand-file monorepo, this turns a five-minute clone into a twenty-second one and saves you from having to teach your editor to ignore most of the disk.&lt;/p&gt;&lt;h3 id="that-is-it"&gt;That is it&lt;/h3&gt;&lt;p&gt;Three commands. None of them were invented in the last year. All of them have been quietly waiting in &lt;code class="highlighter-rouge"&gt;git --help&lt;/code&gt; for me to actually read it.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>From Disqus to giscus. Comments without the bloat.</title>
    <link href="https://artemstar.com/2025/09/02/from-disqus-to-giscus/"/>
    <id>https://artemstar.com/2025/09/02/from-disqus-to-giscus/</id>
    <updated>2025-09-02T00:00:00Z</updated>
    <published>2025-09-02T00:00:00Z</published>
    <content type="html">&lt;p&gt;When I put this blog back online, the very first thing I deleted was the Disqus loader on every page.&lt;/p&gt;&lt;p&gt;If you have not looked at a Disqus embed in a while, it has not improved. The script pulls in third-party JavaScript, a small army of trackers, ad iframes for users not paying for Disqus Pro, and a couple of network round trips before it draws anything. For a static site that loads in under 200&amp;nbsp;ms otherwise, this is just rude.&lt;/p&gt;&lt;h3 id="the-cleanup"&gt;The cleanup&lt;/h3&gt;&lt;p&gt;The original embed lived as a script block on every post. It looked like this, roughly:&lt;/p&gt;&lt;div class="language-html highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&amp;lt;div id="disqus_thread"&amp;gt;&amp;lt;/div&amp;gt;
&amp;lt;script&amp;gt;
  var disqus_config = function () {
    this.page.url = 'http://artemstar.com/...';
    this.page.identifier = '/...';
  };
  // lazy-load on scroll
&amp;lt;/script&amp;gt;
&amp;lt;noscript&amp;gt;Please enable JavaScript to view the comments…&amp;lt;/noscript&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;One regex pass and 57 of these were gone across 19 files. Removing things is a strangely satisfying step. I wish more of my work was removing things.&lt;/p&gt;&lt;h3 id="what-i-considered"&gt;What I considered&lt;/h3&gt;&lt;p&gt;I had a short list:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Disqus, fixed up.&lt;/strong&gt; Pay for Pro, remove the ads, accept that you are still loading a third-party SPA. No.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;&lt;code class="highlighter-rouge"&gt;utterances&lt;/code&gt;.&lt;/strong&gt; Open source, GitHub-issue-backed. Lovely, but the project is sleepy and giscus is essentially its successor.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;&lt;code class="highlighter-rouge"&gt;giscus&lt;/code&gt;.&lt;/strong&gt; Same idea as utterances — comments live in GitHub Discussions on a repo you control. Active project, good docs.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Cactus comments / Commento / Isso.&lt;/strong&gt; Self-hosted. I do not want a database for this blog.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;No comments at all.&lt;/strong&gt; Honestly tempting. The signal-to-noise on most comment threads is bad. People who want to reach me can email.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;I ended up between “no comments” and giscus and decided to leave the blog quiet for now. Giscus is what I would pick the moment I change my mind, so here is the setup that I would use, written down before I forget it.&lt;/p&gt;&lt;h3 id="giscus-in-five-minutes"&gt;giscus in five minutes&lt;/h3&gt;&lt;p&gt;The flow is:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Pick a public GitHub repo to hold the discussions. A dedicated one is fine, e.g. &lt;code class="highlighter-rouge"&gt;yourname/blog-comments&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;Enable Discussions on that repo.&lt;/li&gt;&lt;li&gt;Install the &lt;a href="https://github.com/apps/giscus"&gt;giscus GitHub App&lt;/a&gt; on it.&lt;/li&gt;&lt;li&gt;Go to &lt;a href="https://giscus.app/"&gt;giscus.app&lt;/a&gt;, fill in the repo and mapping options, copy the resulting &lt;code class="highlighter-rouge"&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tag.&lt;/li&gt;&lt;li&gt;Drop the snippet into the comments section of your post template.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;The embed looks like this:&lt;/p&gt;&lt;div class="language-html highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&amp;lt;script src="https://giscus.app/client.js"
        data-repo="yourname/blog-comments"
        data-repo-id="R_kg…"
        data-category="General"
        data-category-id="DIC_kw…"
        data-mapping="pathname"
        data-reactions-enabled="1"
        data-emit-metadata="0"
        data-input-position="bottom"
        data-theme="light"
        data-loading="lazy"
        crossorigin="anonymous"
        async&amp;gt;
&amp;lt;/script&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;p&gt;The two settings that actually matter are &lt;code class="highlighter-rouge"&gt;data-mapping="pathname"&lt;/code&gt; (so a comment thread is tied to a URL path, not a page title that I might rewrite) and &lt;code class="highlighter-rouge"&gt;data-loading="lazy"&lt;/code&gt; (so the comments don’t load on every page view).&lt;/p&gt;&lt;h3 id="the-trade-offs"&gt;The trade-offs&lt;/h3&gt;&lt;p&gt;What changes when you move from Disqus to giscus:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Anonymous comments.&lt;/strong&gt; Gone: everyone needs a GitHub account. For a blog about Docker and Terraform that is roughly the same as the audience I had, so I don’t care. For a recipe blog, this would be a terrible choice.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Spam moderation.&lt;/strong&gt; Mostly not your problem now. It is just GitHub Discussions.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Email notifications.&lt;/strong&gt; Comes from GitHub. You can subscribe to a thread like any issue.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Comment portability.&lt;/strong&gt; You own a repository of Markdown discussions. If giscus ever disappears, you still have the data. With Disqus, the data was always theirs.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;And the obvious nice thing — the comments box no longer makes its own little Christmas of third-party requests. The blog loads at the same speed whether the section is rendered or not, because nothing happens until the user scrolls to it.&lt;/p&gt;&lt;p&gt;If I end up turning comments on again, the snippet above is what goes in. If not, fewer trackers in the world, mine or anyone else’s.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>How to test Terraform built-in functions locally.</title>
    <link href="https://artemstar.com/2018/03/03/terraform-test-functions/"/>
    <id>https://artemstar.com/2018/03/03/terraform-test-functions/</id>
    <updated>2018-03-03T00:00:00Z</updated>
    <published>2018-03-03T00:00:00Z</published>
    <content type="html">
&lt;p&gt;Terraform has a bunch of &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html#built-in-functions"&gt;built-in functions&lt;/a&gt; that allow you to perform common operations when writing infrastructure code. Some of them are so common across programming languages that you can guess what they are for even without reading the documentation. For example, you’ll probably recognize the &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html#length-list-"&gt;length()&lt;/a&gt; function, which returns the number of elements in a given list or map; &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html#list-items-"&gt;list()&lt;/a&gt;, which returns a list consisting of its arguments; and &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html#join-delim-list-"&gt;join()&lt;/a&gt;, which joins the elements of a list with a delimiter into a single string. You can look through the whole list of these functions in the &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html#built-in-functions"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There is a question though. How do you try out these built-in functions locally and see how they work? In this post, I’m going to show a couple of ways you can do that… &lt;!--break--&gt;&lt;/p&gt;
&lt;h3 id="terraform-console"&gt;Terraform console&lt;/h3&gt;
&lt;p&gt;Many programming languages, such as Python and Ruby, provide an &lt;strong&gt;interactive console&lt;/strong&gt; as a quick way to try out commands and test pieces of code without creating a file.&lt;/p&gt;
&lt;p&gt;Terraform has an interactive console, too. But this console is limited in functionality: it only lets you test &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html"&gt;interpolations&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Using &lt;code class="highlighter-rouge"&gt;interpolations&lt;/code&gt; you can insert into strings values that are not known until Terraform runs. For example, you will often interpolate the value of an input variable into the resource configuration:&lt;/p&gt;
&lt;div class="highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;resource "aws_s3_bucket" "main" {
  bucket = "${var.bucket_name}"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The syntax for interpolation is similar to what you will find in other programming languages. The interpolated value is put inside the curly braces and prefixed with a dollar sign, like this: &lt;code class="highlighter-rouge"&gt;${var.bucket_name}&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code class="highlighter-rouge"&gt;terraform console&lt;/code&gt; simply creates the interpolation environment. Everything you type into the console is effectively the same as putting it inside &lt;code class="highlighter-rouge"&gt;${}&lt;/code&gt; in your configuration files. For example, if you launch a terraform console and type a math expression like &lt;code class="highlighter-rouge"&gt;1 + 2&lt;/code&gt;:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform console
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 1 + 2
3
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;You will get &lt;code class="highlighter-rouge"&gt;3&lt;/code&gt; in the output. This means that you can wrap the &lt;code class="highlighter-rouge"&gt;1 + 2&lt;/code&gt; expression inside &lt;code class="highlighter-rouge"&gt;${}&lt;/code&gt;, put it in your configuration file and be confident that this will be evaluated to &lt;code class="highlighter-rouge"&gt;3&lt;/code&gt;. For example, the following will create &lt;code class="highlighter-rouge"&gt;3&lt;/code&gt; S3 buckets in AWS Cloud:&lt;/p&gt;
&lt;div class="highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;resource "aws_s3_bucket" "main" {
  count  = "{1 + 2}"
  bucket = "test-bucket-${count.index}"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Now that we understand what functionality &lt;code class="highlighter-rouge"&gt;terraform console&lt;/code&gt; provides, we can use it to test interpolating the built-in functions.&lt;/p&gt;
&lt;div class="highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ terraform console
&amp;gt; list("hello", "world")
[
  hello,
  world
]
&amp;gt; upper("hello world")
HELLO WORLD
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;But remember: because everything you type is immediately evaluated, you can’t create variables in the interactive console. Thus, if you need to pass a map or a list to a function, you’ll need to create that map or list using the &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html#map-key-value-"&gt;map()&lt;/a&gt; and &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html#list-items-"&gt;list()&lt;/a&gt; functions respectively:&lt;/p&gt;
&lt;div class="highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&amp;gt; length(list("hello", "world"))
2
&amp;gt; lookup(map("id", "1","message", "hello world" ), "id")
1
&amp;gt; lookup(map("id", "1","message", "hello world" ), "message")
hello world
&amp;gt; lookup(map("id", "1","message", "hello world" ), "author", "None")
None
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;If you intentionally mistype a function name, you can see the inner workings of terraform console. It wraps everything you type inside &lt;code class="highlighter-rouge"&gt;${}&lt;/code&gt; and then evaluates the expression:&lt;/p&gt;
&lt;div class="highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&amp;gt; lengt(list("hello", "world"))
1:3: unknown function called: lengt in:

${lengt(list("hello", "world"))}  &amp;lt;-- the function gets put inside ${}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;h3 id="output-variables"&gt;Output variables&lt;/h3&gt;
&lt;p&gt;If you miss being able to define variables, or find working with the interactive console confusing, you can try out the built-in functions using output variables.&lt;/p&gt;
&lt;p&gt;Create a configuration file for testing. Then define any variables you want with a &lt;code class="highlighter-rouge"&gt;variable&lt;/code&gt; block and put the expression you want to test inside the definition of an &lt;a href="https://www.terraform.io/intro/getting-started/outputs.html"&gt;output variable&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To demonstrate the same examples with the &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html#length-list-"&gt;length()&lt;/a&gt; and &lt;a href="https://www.terraform.io/docs/configuration/interpolation.html#lookup-map-key-default-"&gt;lookup()&lt;/a&gt; functions using this new approach, I will create the following &lt;code class="highlighter-rouge"&gt;test.tf&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;variable "my_list" {
  default = ["hello", "world"]
}

variable "my_map" {
  default = {
    id      = "1"
    message = "hello world"
  }
}

## functions to test
output "my_list_test1" {
  value = "${length(var.my_list)}"
}

output "my_map_test1" {
  value = "${lookup(var.my_map, "id")}"
}

output "my_map_test2" {
  value = "${lookup(var.my_map, "message")}"
}

output "my_map_test3" {
  value = "${lookup(var.my_map, "author", "None")}"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;If I run &lt;code class="highlighter-rouge"&gt;terraform apply&lt;/code&gt;, I will get the same results as before:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform init
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply
Outputs:

my_list_test1 &lt;span class="o"&gt;=&lt;/span&gt; 2
my_map_test1 &lt;span class="o"&gt;=&lt;/span&gt; 1
my_map_test2 &lt;span class="o"&gt;=&lt;/span&gt; hello world
my_map_test3 &lt;span class="o"&gt;=&lt;/span&gt; None
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
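&lt;p&gt;As a bonus, once the outputs are stored in the state, you can re-read a single one without running another apply by using the &lt;code class="highlighter-rouge"&gt;terraform output&lt;/code&gt; command:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ terraform output my_map_test3
None
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;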
&lt;p&gt;P.S. You can find the &lt;code class="highlighter-rouge"&gt;test.tf&lt;/code&gt; file in &lt;a href="https://github.com:yourname/terraform-local-test"&gt;this GitHub repo&lt;/a&gt;.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>How to build a CI/CD pipeline using Kubernetes, Gitlab CI, and Helm.</title>
    <link href="https://artemstar.com/2018/01/15/cicd-with-kubernetes-and-gitlab/"/>
    <id>https://artemstar.com/2018/01/15/cicd-with-kubernetes-and-gitlab/</id>
    <updated>2018-01-15T00:00:00Z</updated>
    <published>2018-01-15T00:00:00Z</published>
    <content type="html">
&lt;p&gt;In today’s post I want to share an example of a CI/CD pipeline I created for my test application using Kubernetes (k8s), a very popular orchestrator these days, and Gitlab CI.&lt;/p&gt;
&lt;h3 id="deploy-a-kubernetes-cluster"&gt;Deploy a Kubernetes cluster&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; if you plan to follow my steps, make sure to change the domain name in the &lt;code class="highlighter-rouge"&gt;my-cluster/dns.tf&lt;/code&gt; config file and make the appropriate changes in the name server configuration for your domain.&lt;/p&gt;
&lt;p&gt;I’m going to use my &lt;a href="https://github.com:yourname/terraform-kubernetes"&gt;terraform-kubernetes&lt;/a&gt; repository to quickly deploy a Kubernetes cluster with 3 worker nodes (2 for running my applications and one for Gitlab CI) to Google Cloud Platform.&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ./my-cluster
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform init
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;!--break--&gt;&lt;p&gt;If terraform ran successfully, at the end you’ll see a &lt;a href="https://cloud.google.com/sdk/gcloud/"&gt;gcloud&lt;/a&gt; command which you need to run to configure access to the created cluster with &lt;a href="https://kubernetes.io/docs/reference/kubectl/overview/"&gt;kubectl&lt;/a&gt;.&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;gcloud container clusters get-credentials my-cluster &lt;span class="nt"&gt;--zone&lt;/span&gt; europe-west1-b &lt;span class="nt"&gt;--project&lt;/span&gt; example-123456
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;You can verify that kubectl is configured correctly by checking the server version of k8s:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl version
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;h3 id="configure-kubernetes-cluster"&gt;Configure Kubernetes cluster&lt;/h3&gt;
&lt;p&gt;Next, I will create namespaces for the services I’m going to run in my cluster. I’ll create two namespaces for the different stages of my example application (called &lt;code class="highlighter-rouge"&gt;raddit&lt;/code&gt;) and one namespace for running Gitlab CI. All of my cluster-specific k8s configuration is contained in the &lt;code class="highlighter-rouge"&gt;my-cluster&lt;/code&gt; directory under a subdirectory called &lt;code class="highlighter-rouge"&gt;k8s-config&lt;/code&gt;:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ./k8s-config/env-namespaces
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I will also create a &lt;a href="https://kubernetes.io/docs/concepts/storage/storage-classes/"&gt;storage class&lt;/a&gt; to use in &lt;a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/"&gt;dynamic volume provisioning&lt;/a&gt; for the &lt;code class="highlighter-rouge"&gt;mongodb&lt;/code&gt; service which I’m going to deploy later:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ./k8s-config/storage-classes/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;h3 id="deploy-gitlab-ci-and-kube-lego"&gt;Deploy Gitlab CI and Kube-Lego&lt;/h3&gt;
&lt;p&gt;Now, I’m going to deploy &lt;a href="https://github.com/jetstack/kube-lego"&gt;kube-lego&lt;/a&gt; for automatic handling of &lt;a href="https://letsencrypt.org/"&gt;Let’s Encrypt&lt;/a&gt; SSL certificates:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ./k8s-config/kube-lego
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;After that, we’ll deploy Gitlab CI using the &lt;a href="https://docs.gitlab.com/ce/install/kubernetes/gitlab_omnibus.html"&gt;gitlab-omnibus helm chart&lt;/a&gt; with slight customizations. Make sure to change the IP address to the one from the terraform output and to use your own domain name:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm init &lt;span class="c"&gt;# initialize Helm&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;helm install &lt;span class="nt"&gt;--name&lt;/span&gt; gitlab &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--namespace&lt;/span&gt; infra &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;baseIP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;104.155.31.111 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;baseDomain&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ci.devops-by-practice.fun &lt;span class="se"&gt;\&lt;/span&gt;
./k8s-config/charts/gitlab-omnibus
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;It’s going to take about 5-6 minutes for Gitlab CI to come up. You can check the status of the pods with the command below:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; infra
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;h3 id="create-a-new-group-of-projects"&gt;Create a new group of projects&lt;/h3&gt;
&lt;p&gt;Once Gitlab CI is up and running, it should be accessible at &lt;code class="highlighter-rouge"&gt;gitlab.ci.&amp;lt;your-domain&amp;gt;&lt;/code&gt; (use the &lt;code class="highlighter-rouge"&gt;root&lt;/code&gt; username to log in).&lt;/p&gt;
&lt;p&gt;In Gitlab UI, I’ll create a new group for my &lt;a href="https://github.com:yourname/kubernetes-gitlab-example"&gt;raddit&lt;/a&gt; application:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/grou.png" alt="400x400"&gt;&lt;/p&gt;
&lt;p&gt;Gitlab CI groups allow you to group related projects. In our example, the &lt;code class="highlighter-rouge"&gt;raddit&lt;/code&gt; group will contain the microservices that the raddit application consists of.&lt;/p&gt;
&lt;p&gt;Now I will clone a repository with the raddit application:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone git@github.com:yourname/kubernetes-gitlab-example.git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;And create a new project in the Gitlab CI web UI for each component of the raddit application:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/prjcts.png" alt="400x400"&gt;&lt;/p&gt;
&lt;h3 id="describe-a-cicd-pipeline-for-each-project"&gt;Describe a CI/CD pipeline for each project&lt;/h3&gt;
&lt;p&gt;Each component of the raddit application is contained in its own repository and has its own CI/CD pipeline defined in a &lt;code class="highlighter-rouge"&gt;.gitlab-ci.yml&lt;/code&gt; file (which has a special meaning for Gitlab CI) stored in the root of each component’s directory.&lt;/p&gt;
&lt;p&gt;Let’s have a look at the &lt;strong&gt;ui&lt;/strong&gt; service pipeline. Because the pipeline file is long, I’ll break it into pieces and comment on each one of them.&lt;/p&gt;
&lt;p&gt;First, we define stages in our pipeline and environment variables. The env vars will be set by Gitlab Runner before running each job:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/pipe-ui1.png" alt="400x400"&gt;&lt;/p&gt;
&lt;p&gt;Although Gitlab CI has its own container registry, in this example, I’m going to use Docker Hub, so if you’re following along, apart from &lt;code class="highlighter-rouge"&gt;GKE info&lt;/code&gt; variables, you’ll need to make sure the &lt;code class="highlighter-rouge"&gt;CONTAINER_IMAGE&lt;/code&gt; variable is set according to your Docker Hub account name.&lt;/p&gt;
&lt;p&gt;Also, don’t forget to change the &lt;code class="highlighter-rouge"&gt;DOMAIN_NAME&lt;/code&gt; variable.&lt;/p&gt;
&lt;p&gt;Now let’s go to the pipeline’s stages.&lt;/p&gt;
&lt;p&gt;In the first (&lt;strong&gt;build&lt;/strong&gt;) stage, we build the ui application container, tag it with the branch name and commit hash, and push it to the Docker Hub registry. Then, in the &lt;strong&gt;test&lt;/strong&gt; stage, we can run tests for our application, though I have none in this example:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/pipe-ui2.png" alt="400x400"&gt;&lt;/p&gt;
&lt;p&gt;In the following &lt;strong&gt;release&lt;/strong&gt; stage, we assign a version tag to the docker images that passed the tests successfully:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/pipe-ui3.png" alt="400x400"&gt;&lt;/p&gt;
&lt;p&gt;The next &lt;strong&gt;deploy&lt;/strong&gt; stage is split into 2 jobs. The differences are very small: I don’t enable ingress for the service running in &lt;code class="highlighter-rouge"&gt;staging&lt;/code&gt; and the deployment to &lt;code class="highlighter-rouge"&gt;production&lt;/code&gt; is &lt;code class="highlighter-rouge"&gt;manual&lt;/code&gt;. The deployment is done using &lt;a href="https://helm.sh/"&gt;Helm&lt;/a&gt; which is a package manager for k8s. You can find helm charts for raddit’s microservices in the root of their directories under the &lt;code class="highlighter-rouge"&gt;charts&lt;/code&gt; subdirectory.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/pipe-ui4.png" alt="400x400"&gt;&lt;/p&gt;
&lt;h3 id="launch-pipeline-for-each-service"&gt;Launch pipeline for each service&lt;/h3&gt;
&lt;p&gt;The pipeline is already described for each service, but you need to change some env vars, as I mentioned above, in each of the &lt;code class="highlighter-rouge"&gt;.gitlab-ci.yml&lt;/code&gt; files.&lt;/p&gt;
&lt;p&gt;To launch a pipeline for a service, we first need to define some secret variables. Click on the &lt;strong&gt;post&lt;/strong&gt; project, for example, and go to &lt;code class="highlighter-rouge"&gt;Settings -&amp;gt; CI/CD&lt;/code&gt;. Define the &lt;code class="highlighter-rouge"&gt;CI_REGISTRY_USER&lt;/code&gt; and &lt;code class="highlighter-rouge"&gt;CI_REGISTRY_PASSWORD&lt;/code&gt; variables to allow logging in to Docker Hub.&lt;/p&gt;
&lt;p&gt;Also, define a &lt;code class="highlighter-rouge"&gt;service_account&lt;/code&gt; variable to allow Gitlab to deploy to your k8s cluster. To get a value for a &lt;code class="highlighter-rouge"&gt;service_account&lt;/code&gt; variable just run &lt;code class="highlighter-rouge"&gt;terraform init&lt;/code&gt; and &lt;code class="highlighter-rouge"&gt;terraform apply&lt;/code&gt; in the &lt;a href="https://github.com:yourname/terraform-kubernetes/tree/master/accounts/service-accounts"&gt;accounts/service-accounts&lt;/a&gt; directory and copy a value from the output.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/secrets.png" alt="400x400"&gt;&lt;/p&gt;
&lt;p&gt;Define the same variables for &lt;strong&gt;ui&lt;/strong&gt; and &lt;strong&gt;mongodb&lt;/strong&gt; projects.&lt;/p&gt;
&lt;p&gt;Now you can push each service’s folder to the Gitlab repository, and the pipeline should start automatically.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/pipeline.png" alt="400x400"&gt;&lt;/p&gt;
&lt;h3 id="accessing-the-raddit-application"&gt;Accessing the Raddit application&lt;/h3&gt;
&lt;p&gt;In the staging namespace you can access the application by using port-forwarding:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/pf.png" alt="400x400"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/staging.png" alt="400x400"&gt;&lt;/p&gt;
&lt;p&gt;The application deployed into production should be accessible by its domain name via HTTP and HTTPS. Note that it can take 5-10 minutes after the first deployment for the application to become reachable by the domain name, because provisioning a Google load balancer takes some time:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/k8s-gitlab/prod.png" alt="400x400"&gt;&lt;/p&gt;
&lt;h3 id="destroy-the-playground"&gt;Destroy the playground&lt;/h3&gt;
&lt;p&gt;After you’re done playing with Gitlab CI and k8s and wish to delete the GCP resources that you’ve created, run the following commands:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;ls&lt;/span&gt; | cut &lt;span class="nt"&gt;-f1&lt;/span&gt; | tail &lt;span class="nt"&gt;-n&lt;/span&gt; +2 | xargs helm delete &lt;span class="nt"&gt;--purge&lt;/span&gt; &lt;span class="c"&gt;# delete all the deployed charts&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform destroy &lt;span class="c"&gt;# destroy resources created by terraform&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;</content>
  </entry>
  <entry>
    <title>How to visualize your workflow with GitHub projects using AWS Lambda.</title>
    <link href="https://artemstar.com/2017/08/12/aws-lambda-github-bot/"/>
    <id>https://artemstar.com/2017/08/12/aws-lambda-github-bot/</id>
    <updated>2017-08-12T00:00:00Z</updated>
    <published>2017-08-12T00:00:00Z</published>
    <content type="html">
&lt;h3 id="the-problem"&gt;The problem&lt;/h3&gt;
&lt;p&gt;In our company, we use GitHub for source control of our projects. We have tens of different GitHub repos, almost every one of them has outstanding issues and pull requests (PRs), and as the number of projects grows, it becomes very difficult to manage our work on them. Although we receive notifications about new issues and PRs in our chat, they are not organized. We clearly needed a central place to store and visualize all our issues and PRs, so that we could see the problems we have and prioritize our work.&lt;/p&gt;
&lt;p&gt;GitHub has &lt;a href="https://help.github.com/articles/about-project-boards/"&gt;project boards&lt;/a&gt; that allow you to create Kanban boards for your GitHub issues and PRs. The problem, though, is that the process of adding new issues and PRs to a project board is manual. And we clearly didn’t want to go to GitHub and add a new card to the board for every new issue we receive in our chat.&lt;/p&gt;
&lt;p&gt;Thus, we decided to automate this process and create a simple GitHub bot using AWS Lambda. &lt;!--break--&gt;&lt;/p&gt;
&lt;h3 id="github-marketplace"&gt;GitHub marketplace&lt;/h3&gt;
&lt;p&gt;In the GitHub &lt;a href="https://github.com/marketplace/category/project-management"&gt;marketplace&lt;/a&gt;, I found some project management solutions that could solve our problem. But they seemed to cost a fortune.&lt;/p&gt;
&lt;p&gt;For example, &lt;a href="https://www.zenhub.com/"&gt;ZenHub&lt;/a&gt; creates a board for each repo with all its issues and PRs. It also provides the functionality of placing multiple repositories’ issues and PRs &lt;a href="https://www.zenhub.com/blog/multi-repo-boards-have-arrived/"&gt;on a single board&lt;/a&gt;. However, because boards are created per repository, to see all your issues and PRs across all of your repositories, you would have to go to some repo and merge all your repos to that repo’s board, which is kind of not very convenient and what we wanted.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codetree.com/"&gt;Codetree&lt;/a&gt; allows your to create a board for opened issues and PRs of multiple repositories and it seemed like it would do for our case, but again the pricing was way too high for our needs.&lt;/p&gt;
&lt;p&gt;There are also some other project management tools like &lt;a href="https://github.com/marketplace/waffle"&gt;Waffle&lt;/a&gt; and &lt;a href="https://github.com/marketplace/zube"&gt;Zube&lt;/a&gt;, but they all cost too much. I believe those could be handy for teams who have projects under intense development with hundreds or even thousands of issues and PRs. We don’t have any such projects, but we do have plenty of projects, some of which have issues, and that work still requires proper visualization and management.&lt;/p&gt;
&lt;p&gt;Besides, it seems like GitHub already has the required functionality in place - the &lt;a href="https://help.github.com/articles/about-project-boards/"&gt;project boards&lt;/a&gt;. It only required a bit of automation to make the flow of project cards (issues or PRs) across the board visible.&lt;/p&gt;
&lt;h3 id="aws-lambda"&gt;AWS Lambda&lt;/h3&gt;
&lt;p&gt;AWS Lambda is a computing service provided by Amazon Web Services (AWS) that lets you create &lt;em&gt;serverless applications&lt;/em&gt;. With a serverless model, you don’t have to worry about provisioning or managing servers (AWS does it for you), and can simply focus on writing your code. Sounds like a dream for any developer, right? :)&lt;/p&gt;
&lt;p&gt;In AWS Lambda, you create a function which will be run in response to certain types of events. It is tightly integrated with other AWS services, so you may choose the events to come from many different sources like S3, CloudWatch, SNS, etc.&lt;/p&gt;
&lt;p&gt;In our case, we used GitHub integration with Amazon SNS and then made a Lambda function listen to the topic to which the events were pushed. We’ll talk more about it in a bit.&lt;/p&gt;
&lt;p&gt;A great thing about Lambda is that you pay only for the time your code runs and the amount of memory your function uses. Besides, Lambda includes 1M free requests per month and 400,000 GB-seconds of compute time per month.&lt;/p&gt;
&lt;p&gt;With the current rate of invocations of our 128 MB Lambda function, we use the service for free!&lt;/p&gt;
&lt;p&gt;Compared to the paid solutions on GitHub marketplace, we save at least 50 dollars a month.&lt;/p&gt;
&lt;h3 id="githubotik"&gt;Githubotik&lt;/h3&gt;
&lt;p&gt;It’s about time we talk about the Lambda function itself.&lt;/p&gt;
&lt;p&gt;I wanted to create something very simple. I wrote &lt;a href="https://github.com:yourname/aws-lambda-githubbot/tree/master/githubotik/github_functions"&gt;a python module&lt;/a&gt; to interact with the GitHub API the way I needed. The set of functions that this module provides allowed me to perform different actions with the project cards: creating them, moving across the board, deleting them.&lt;/p&gt;
&lt;p&gt;Then I started to think about the Lambda function itself. But before I started to write the code, I had to think about what workflow with GitHub issues and PRs we needed. The idea that I had in mind and which we in the end decided to implement was simple:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;We have one project board called Backlog for open issues and PRs across all our repositories. The Backlog has 3 columns: TODO, WIP, DONE. For each opened (or reopened) issue or PR, a card is created on the board in the TODO column. If someone from our team starts working on an issue (or a PR, if it takes too long to merge), he assigns that issue to himself, and the card that represents that issue on the Backlog board is moved to the WIP column, which stands for “work in progress”. When an issue gets closed or a PR gets merged/closed, its card is moved to the DONE column. From the DONE column we would remove the cards manually.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;As you can see from the description, our project board was going to be a simple Kanban board for visualizing our workflow with GitHub projects and which would be managed mostly automatically by the Lambda function.&lt;/p&gt;
&lt;p&gt;Then I created a Lambda function that would implement the described workflow. I will show a piece of that function here; you can see the whole version in my &lt;a href="https://github.com:yourname/aws-lambda-githubbot/blob/master/githubotik/githubotik.py"&gt;repo&lt;/a&gt;:&lt;/p&gt;
&lt;div class="language-python highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;config.config_loader&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ConfigLoader&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;github_functions.githubclient&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;GithubClient&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;json&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ConfigLoader&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;github&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;GithubClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'org'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'token'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'media_type'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="n"&gt;gitevent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Records'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'Sns'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'Message'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;gitevent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'action'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;"opened"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="s"&gt;'pull_request'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;gitevent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add_pull_request_card&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'project'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'column_for_open'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="n"&gt;gitevent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'repository'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'name'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="n"&gt;gitevent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'pull_request'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'labels'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add_issue_card&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'project'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'column_for_open'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="n"&gt;gitevent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'repository'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'name'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;gitevent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'issue'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'number'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="n"&gt;gitevent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'issue'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'labels'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;You can see here that we first load the &lt;code class="highlighter-rouge"&gt;ConfigLoader&lt;/code&gt; module. It’s another module that I created to load configuration settings such as the name of the organization, the name of the project board and its columns, and the token and media type for talking to the GitHub API. This way, all the settings that determine the behavior of the Lambda function are stored in a &lt;a href="https://github.com:yourname/aws-lambda-githubbot/blob/master/githubotik/config/config.json"&gt;single json file&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Then in the &lt;code class="highlighter-rouge"&gt;lambda_handler&lt;/code&gt;, which is the actual Lambda function code that will be executed in response to events, we load the configuration, create an instance of GitHubClient class to talk to GitHub API and load the event from SNS topic. Next we look for different fields in the event, which is basically just a JSON object, and if those fields correspond to the activities that we’re waiting for, then we act according to our workflow, i.e. creating or moving the cards across the board.&lt;/p&gt;
&lt;h3 id="example"&gt;Example&lt;/h3&gt;
&lt;p&gt;I’ll show you how you can create a simple Github bot to visualize your GitHub workflow using the code in this &lt;a href="https://github.com:yourname/aws-lambda-githubbot"&gt;repo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The first thing we’re going to do is to define the variables in the configuration file that we’re loading as part of the Lambda function.&lt;/p&gt;
&lt;div class="language-json highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"org"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s2"&gt;"githubotik-inc"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"project"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Backlog"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"column_for_open"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TODO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"column_in_progress"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"WIP"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"column_for_closed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DONE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"token"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"XXXXXXXXXXXXXXXXXXXXXXXXX"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"labels"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"help wanted"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"media_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"application/vnd.github.inertia-preview+json"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Most of the settings are pretty much self-explanatory, although I will make a few comments about some of them.&lt;/p&gt;
&lt;p&gt;The &lt;code class="highlighter-rouge"&gt;media type&lt;/code&gt; should be passed in the &lt;code class="highlighter-rouge"&gt;Accept&lt;/code&gt; header of all API calls we make, because the &lt;a href="https://developer.github.com/v3/projects/"&gt;Projects API&lt;/a&gt; is currently in the preview period. So this setting may need to be changed in the future.&lt;/p&gt;
&lt;p&gt;The &lt;code class="highlighter-rouge"&gt;token&lt;/code&gt; is the GitHub token of a user in your organization on which behalf the automatic actions will be performed. In our organization, it’s a special user &lt;code class="highlighter-rouge"&gt;express42-bot&lt;/code&gt;. And in the actual Lambda function we use for ourselves, we don’t specify the &lt;code class="highlighter-rouge"&gt;token&lt;/code&gt; in the config file, but set it as an environment variable and use KMS encryption with our own key, which technically makes the functions cost us 1 dollar a month :)&lt;/p&gt;
&lt;p&gt;You can also specify &lt;code class="highlighter-rouge"&gt;labels&lt;/code&gt; that you want to apply to newly made issues and PRs. Although we decided that we didn’t need that and turned it off, the functionality is there so you can turn it on if you like.&lt;/p&gt;
&lt;p&gt;Next, we will create the &lt;code class="highlighter-rouge"&gt;Backlog&lt;/code&gt; project board.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/backlog-board.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Now we can move on to create an SNS topic in AWS Management Console:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/sns-topic.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Make sure you copy the topic’s ARN, we will need it later.&lt;/p&gt;
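&lt;p&gt;If you prefer the command line, the same topic can be created with the AWS CLI, which conveniently prints the ARN (the topic name here is just an example):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;# create the topic and print its ARN
$ aws sns create-topic --name github-events
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;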
&lt;p&gt;To allow GitHub to send events to our SNS topic, we need to provide credentials when setting GitHub repo’s integration with Amazon SNS. Therefore, as our next step, we create a new user in IAM:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/create-user.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Make sure you copy the &lt;em&gt;access&lt;/em&gt; and &lt;em&gt;secret&lt;/em&gt; keys.&lt;/p&gt;
&lt;p&gt;As a good security practice, we give our new user only the access he needs: we create a custom inline policy which grants him the right to publish to the SNS topic we created earlier.&lt;/p&gt;
&lt;div class="language-json highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"sns:Publish"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;SNS topic ARN&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;While working with IAM, we also create an AWS Lambda service role for our Lambda function&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/lambda-role1.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;and attach &lt;code class="highlighter-rouge"&gt;AWSLambdaBasicExecutionRole&lt;/code&gt; managed policy to this role. This will allow our function to write logs to CloudWatch.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/lambda-role2.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Finally, we’ll give it a name:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/lambda-role3.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Now we add an integration with Amazon SNS to one of our organization repositories. To do that, we’ll use a &lt;a href="https://github.com:yourname/aws-lambda-githubbot/blob/master/create_hook.py"&gt;create_hook script&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Don’t try to add the integration manually, because by default GitHub will only send information about &lt;em&gt;push events&lt;/em&gt; to the topic, and the GitHub UI currently doesn’t allow you to configure it otherwise, so we have to do it via the API. Besides, with this script you can add the integration to multiple repos with just one command.&lt;/p&gt;
&lt;p&gt;But before we can use it, we need to define some variables.&lt;/p&gt;
&lt;div class="language-python highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="n"&gt;TOKEN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="c"&gt;# Github token&lt;/span&gt;
&lt;span class="n"&gt;ORG_NAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"githubotik-inc"&lt;/span&gt; &lt;span class="c"&gt;# organization name on Github&lt;/span&gt;

&lt;span class="c"&gt;## SNS integration configuration&lt;/span&gt;
&lt;span class="n"&gt;AWS_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;
&lt;span class="n"&gt;AWS_SECRET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;
&lt;span class="n"&gt;SNS_TOPIC&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="c"&gt;# this should be ARN of a topic&lt;/span&gt;
&lt;span class="n"&gt;SNS_REGION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="c"&gt;# region where sns topic was created&lt;/span&gt;
&lt;span class="c"&gt;# see all possible types of events here (https://api.github.com/hooks), look for amazonsns&lt;/span&gt;
&lt;span class="n"&gt;EVENTS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"issues"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"pull_request"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;We need to provide a GitHub &lt;code class="highlighter-rouge"&gt;TOKEN&lt;/code&gt; to be able to talk to the GitHub API. As we’re going to change the repository’s settings, the token has to be provided by someone in your team who has access to those settings.&lt;/p&gt;
&lt;p&gt;The &lt;code class="highlighter-rouge"&gt;AWS_KEY&lt;/code&gt;, &lt;code class="highlighter-rouge"&gt;AWS_SECRET&lt;/code&gt;, &lt;code class="highlighter-rouge"&gt;SNS_TOPIC&lt;/code&gt;, and &lt;code class="highlighter-rouge"&gt;SNS_REGION&lt;/code&gt; variables are the values we need to provide to configure a repository’s integration with Amazon SNS.&lt;/p&gt;
&lt;p&gt;&lt;code class="highlighter-rouge"&gt;EVENTS&lt;/code&gt; indicates what types of events we want GitHub to send to the SNS topic.&lt;/p&gt;
&lt;p&gt;After we’ve defined the variables, we can configure our repository with a simple command like this:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;python create_hook.py nginx-cookbook
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;By the way, we can provide multiple arguments to this command to configure multiple repositories.&lt;/p&gt;
&lt;p&gt;You can now open a repository’s page in your browser to make sure it’s configured correctly.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/repo-integration.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;We finally come to the last step which is creating a Lambda function in AWS :)&lt;/p&gt;
&lt;p&gt;In order to create a Lambda function in AWS, we need to provide a zip archive of our Lambda function code, as well as all its dependencies.&lt;/p&gt;
&lt;p&gt;In this step, we could use some advanced ways to create a Lambda function. For example, the &lt;a href="https://serverless.com/"&gt;Serverless&lt;/a&gt; framework looks pretty cool. However, in our simple case, I think it would be unnecessary, as it requires time to install the framework on your machine and learn it. So I decided to go with a good old &lt;a href="https://github.com:yourname/aws-lambda-githubbot/blob/master/Makefile"&gt;Makefile&lt;/a&gt; :)&lt;/p&gt;
&lt;p&gt;In fact, I took the idea with the Makefile from this &lt;a href="https://www.youtube.com/watch?v=68teS9nNvPQ"&gt;video&lt;/a&gt; and it seemed like it would fit perfectly for our project.&lt;/p&gt;
&lt;p&gt;When you’re done with the code, simply run two commands to package your code and dependencies:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;make  &lt;span class="c"&gt;# create a virtual env&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;make build  &lt;span class="c"&gt;# package a function in zip format&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The zip archive will be created at the following path: &lt;code class="highlighter-rouge"&gt;package/githubotik.zip&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Let’s go now to AWS Management Console once again and create a Lambda function.&lt;/p&gt;
&lt;p&gt;Go to AWS Lambda service page and look for &lt;code class="highlighter-rouge"&gt;sns&lt;/code&gt; blueprint.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/aws-lambda1.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Add the SNS topic we created earlier as the trigger and enable it:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/aws-lambda2.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Set the name of the function and choose the runtime environment:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/aws-lambda3.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Choose to upload a zip package and upload our function:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/aws-lambda4.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Fill in the &lt;code class="highlighter-rouge"&gt;Handler&lt;/code&gt; field according to the instructions and select the role for our function:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/aws-lambda5.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Click advanced settings to configure the memory usage and timeout. For our simple function, 128 MB of memory will be more than enough, and it hardly ever takes longer than 5 seconds to run.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/githubotik/aws-lambda6.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;That’s it. Now let’s try it out! In the video below, I’ll demonstrate how the work with issues and PRs is now visualized on the project board: &lt;br&gt;&lt;br&gt; &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/UU3TM-hG9tg" frameborder="0" allowfullscreen=""&gt;&lt;/iframe&gt;&lt;/p&gt;
&lt;h3 id="conclusion"&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Another problem that we faced when we started to use this Github bot was that we had to put all our existing issues and PRs on the board. If you decide to use it, you may find &lt;a href="https://github.com:yourname/aws-lambda-githubbot/blob/master/add_old_issues_to_project.py"&gt;this&lt;/a&gt; script useful, as it does that automatically for you.&lt;/p&gt;
&lt;p&gt;We’ve been using this bot for over a month now, and so far it has really made our work with Github projects visible and more manageable.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Master Linux CLI (part I). Useful Linux commands.</title>
    <link href="https://artemstar.com/2017/08/03/useful-linux-commands/"/>
    <id>https://artemstar.com/2017/08/03/useful-linux-commands/</id>
    <updated>2017-08-03T00:00:00Z</updated>
    <published>2017-08-03T00:00:00Z</published>
    <content type="html">
&lt;p&gt;Interestingly, when you start reading books about Linux or going through different tutorials, they often don’t tell you about some of the cool commands that make your work sometimes so much easier. Maybe they are hiding them from you to make sure you still have to learn something in the future? &lt;code class="highlighter-rouge"&gt;¯\_(ツ)_/¯&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Anyway, today I decided to make a quick overview of some of the commands which I find useful, but which are sometimes hard to find out about. &lt;!--break--&gt;&lt;/p&gt;
&lt;h3 id="tee"&gt;tee&lt;/h3&gt;
&lt;p&gt;&lt;code class="highlighter-rouge"&gt;tee&lt;/code&gt; command allows you to write to the stdout and a file (or files) at the same time.&lt;/p&gt;
&lt;p&gt;This is useful when you want to store and view the output of any command.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/linux/tee11.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;And you can also use it to save the output to multiple files.&lt;/p&gt;
&lt;p&gt;This command is incredibly useful when you want to store the output of a command to a file but also redirect it as an input to another command.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/linux/tee33.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;As you can see, we are able to take snapshots of the data as it flows through the pipes.&lt;/p&gt;
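&lt;p&gt;As a quick textual example of that last use case (the file and process names here are arbitrary):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;# save the full process list to a file while still piping it to grep
$ ps aux | tee processes.txt | grep sshd
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;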
&lt;h3 id="pbcopy-mac-or-xclip-linux"&gt;pbcopy (Mac) or xclip (Linux)&lt;/h3&gt;
&lt;p&gt;This allows you to copy a file’s content to the clipboard.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/linux/pbcopy.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Now if you try to paste, you’ll get &lt;code class="highlighter-rouge"&gt;cucaracha&lt;/code&gt;, which by the way means a cockroach in Spanish :)&lt;/p&gt;
&lt;p&gt;This command makes copying from the terminal a breeze. I find it especially useful when I need to copy SSH or GPG keys.&lt;/p&gt;
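&lt;p&gt;For example, copying your public SSH key takes a single command (assuming the default key path):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;# Mac
$ pbcopy &lt; ~/.ssh/id_rsa.pub
# Linux
$ xclip -selection clipboard &lt; ~/.ssh/id_rsa.pub
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;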
&lt;h3 id="watch"&gt;watch&lt;/h3&gt;
&lt;p&gt;&lt;code class="highlighter-rouge"&gt;watch&lt;/code&gt; runs a specified command repeatedly at regular intervals and displays its output on a console.&lt;/p&gt;
&lt;p&gt;This is used when you need to continuously monitor some command’s output.&lt;/p&gt;
&lt;p&gt;Simple examples include monitoring who is logged in to the system with the &lt;code class="highlighter-rouge"&gt;watch who&lt;/code&gt; command, or watching for changes inside a directory with &lt;code class="highlighter-rouge"&gt;watch ls&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;We should note the &lt;code class="highlighter-rouge"&gt;-d&lt;/code&gt; option that allows you to highlight the changes that happen in the command’s output.&lt;/p&gt;
&lt;p&gt;Just to show you how it works, we’ll use it with a &lt;code class="highlighter-rouge"&gt;date&lt;/code&gt; command. &lt;script type="text/javascript" src="https://asciinema.org/a/wP3Hd9maxP6tgTYNKOxqYvIiM.js" id="asciicast-wP3Hd9maxP6tgTYNKOxqYvIiM" async=""&gt;&lt;/script&gt;&lt;/p&gt;
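&lt;p&gt;In case the recording doesn’t load, the invocation used there is along these lines:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;# re-run date every 2 seconds (the default interval) and highlight what changed
$ watch -d date
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;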
&lt;h3 id="script--scriptreplay"&gt;script &amp;amp; scriptreplay&lt;/h3&gt;
&lt;p&gt;These are two awesome commands which allow you to record and then replay a shell session.&lt;/p&gt;
&lt;p&gt;To use a &lt;code class="highlighter-rouge"&gt;script&lt;/code&gt; command, we can just type &lt;code class="highlighter-rouge"&gt;script&lt;/code&gt; in which case the session will be stored in a default file named &lt;code class="highlighter-rouge"&gt;typescript&lt;/code&gt;. We can also specify the name of the file in which we want to store our session as the first argument to the &lt;code class="highlighter-rouge"&gt;script&lt;/code&gt; command. &lt;script type="text/javascript" src="https://asciinema.org/a/HVp4CIb8zU8JolXvzVY7dpyQk.js" id="asciicast-HVp4CIb8zU8JolXvzVY7dpyQk" async=""&gt;&lt;/script&gt;&lt;/p&gt;
&lt;p&gt;An alternative to the &lt;code class="highlighter-rouge"&gt;script&lt;/code&gt; command is &lt;code class="highlighter-rouge"&gt;history&lt;/code&gt;, but it only keeps track of the commands you use and not their outputs.&lt;/p&gt;
&lt;p&gt;Another cool thing about the script command is that the shell session that you have recorded can then be replayed in your terminal with a &lt;code class="highlighter-rouge"&gt;scriptreplay&lt;/code&gt; command. This is particularly helpful if during the session you start interacting with some programs like &lt;code class="highlighter-rouge"&gt;htop&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;To be able to replay a recorded shell session, we need to specify a filename for storing the timing information.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ script --timing=time.txt myshell.log&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Then, after we’re done recording, we can use the &lt;code class="highlighter-rouge"&gt;scriptreplay&lt;/code&gt; command to replay the session.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ scriptreplay --timing=time.txt myshell.info&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;script type="text/javascript" src="https://asciinema.org/a/vurga2WE8SpfRbjcXDfNKKONn.js" id="asciicast-vurga2WE8SpfRbjcXDfNKKONn" async=""&gt;&lt;/script&gt;&lt;p&gt;It’s also important to note the &lt;code class="highlighter-rouge"&gt;-c&lt;/code&gt; option to this command which allows to record the output of a single command. For example, this might come in handy when we need to record a command’s output in our bash script. The syntax goes like this:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ script -c 'ping -c 3 google.com' myshell3.log&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;script type="text/javascript" src="https://asciinema.org/a/LBH7GXiAj7QF6PXCS9qAIyX5P.js" id="asciicast-LBH7GXiAj7QF6PXCS9qAIyX5P" async=""&gt;&lt;/script&gt;&lt;h3 id="jq"&gt;jq&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://stedolan.github.io/jq/"&gt;jq&lt;/a&gt; is a handy JSON processor that allows you to extract necessary fields from a JSON object.&lt;/p&gt;
&lt;p&gt;Let’s take a simple JSON file and extract some specific fields.&lt;/p&gt;
&lt;div class="language-json highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"colors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"black"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"category"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hue"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"primary"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"code"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"rgba"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"hex"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#000"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;img src="/public/img/linux/jq.png" alt="200x200"&gt;&lt;/p&gt;
&lt;h3 id="env"&gt;env&lt;/h3&gt;
&lt;p&gt;If run without any arguments, this command will show you a list of the current environment variables.&lt;/p&gt;
&lt;p&gt;It also allows you to run commands with specific environment variables without actually changing your environment.&lt;/p&gt;
&lt;p&gt;This can be helpful when you need to run a one-time command that requires some specific env variable, but you don’t really want to change your environment. For example, to build a Go binary for Ubuntu on my Macbook, I would run a command like this:&lt;/p&gt;
&lt;div class="language-yaml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ env GOOS=linux GOARCH=amd64 go build src/hello-world.go&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This runs the &lt;code class="highlighter-rouge"&gt;go build&lt;/code&gt; command with two additional environment variables. If I run the &lt;code class="highlighter-rouge"&gt;env&lt;/code&gt; command right after that, I won’t find those variables in my environment.&lt;/p&gt;
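&lt;p&gt;You can quickly verify that for yourself:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ env GOOS=linux GOARCH=amd64 go build src/hello-world.go
$ env | grep GOOS    # no output: GOOS never entered the shell's environment&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;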
&lt;p&gt;&lt;em&gt;Hopefully, you’ve found this post useful. And if you have some interesting commands to share with me, please leave a comment below!&lt;/em&gt;&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>What are Docker OS images and why would I want to use them in my Dockerfile?</title>
    <link href="https://artemstar.com/2017/07/23/docker-os-images/"/>
    <id>https://artemstar.com/2017/07/23/docker-os-images/</id>
    <updated>2017-07-23T00:00:00Z</updated>
    <published>2017-07-23T00:00:00Z</published>
    <content type="html">
&lt;p&gt;Docker is not new these days. Everybody knows how to run a docker container at least on a local machine, because it’s so easy. You find an image on DockerHub, you run &lt;code class="highlighter-rouge"&gt;docker run -d &amp;lt;image-name&amp;gt;&lt;/code&gt; and that’s it.&lt;/p&gt;
&lt;p&gt;You probably also know how to build docker images, because all you need is to create a &lt;code class="highlighter-rouge"&gt;Dockerfile&lt;/code&gt;, use 7-8 &lt;a href="https://docs.docker.com/engine/reference/builder/"&gt;commands&lt;/a&gt; to describe how to package your application and all its dependencies, and finally build the container image with the &lt;code class="highlighter-rouge"&gt;docker build&lt;/code&gt; command and run it.&lt;/p&gt;
&lt;p&gt;Let’s look at this simple Dockerfile.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;FROM ubuntu:14.04&lt;/span&gt;
&lt;span class="s"&gt;COPY ./hello-world .&lt;/span&gt;
&lt;span class="s"&gt;EXPOSE 8080&lt;/span&gt;
&lt;span class="s"&gt;CMD [ "./hello-world" ]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;It looks simple, right? Nothing special.&lt;/p&gt;
&lt;p&gt;You specify the OS image, add your binary file, … wait, it just struck me. Look again at the Dockerfile. &lt;!--break--&gt;&lt;/p&gt;
&lt;p&gt;What in the world is the ubuntu OS doing here? If a container needs an operating system, then how is it different from a virtual machine?&lt;/p&gt;
&lt;p&gt;A virtual machine has an operating system and runs our applications inside. So how different is a docker container then? Is it some type of virtual machine?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Was all that talk about &lt;code class="highlighter-rouge"&gt;a container represents a bundle of your application and its dependencies&lt;/code&gt; just a big lie? (☉_☉)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To really understand the difference between VMs and containers, I suggest we look at a bit of history. I want to specifically look at how these technologies came into play and what problems they were meant to solve.&lt;/p&gt;
&lt;p&gt;Let’s first start by looking at how things were before people started using VMs. The traditional server model that was used for decades in IT looked like this.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/docker/oldmodel.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Here we have some vendor hardware and an operating system which runs on top of it. The operating system is bound to the hardware by the drivers which are specific to that hardware. Finally, on top of the OS we run various types of applications and services.&lt;/p&gt;
&lt;p&gt;This model has lots of limitations.&lt;/p&gt;
&lt;p&gt;First, it leads to lots of underutilized hardware. In most cases, the applications and services that you run on your server don’t utilize even half of its computing resources. In fact, studies have shown that the average utilization of a server’s resources was about 5-10%. This means that if you spend 10 thousand dollars on server hardware, you waste 9 thousand dollars’ worth of computing power. Plus, you have to spend a lot of money on server maintenance.&lt;/p&gt;
&lt;p&gt;But why don’t we utilize all of our computing resources? Why can’t we just run more applications on our servers? (・ω・)b&lt;/p&gt;
&lt;p&gt;Well, it’s not as simple as it might seem. There are challenges that we face at the OS level, such as running multiple instances of an application or running different versions of the same application at the same time.&lt;/p&gt;
&lt;p&gt;Most of the applications you see have some dependencies that need to be provided in order for them to run. And the problem we often see is that two applications may depend on the same library but require different versions of that library. For example, this might be the case when we want to run different versions of the same application. Installing two versions of the same library system-wide and telling each of our applications which one they need to use would be a mess and in most cases is not even possible.&lt;/p&gt;
&lt;p&gt;We can see how this problem is addressed in some programming languages. For instance, in python, you use virtual environments to create a separate, isolated python runtime environment with all the dependencies specific to each application. This way we can run different python applications without breaking each application’s dependencies.&lt;/p&gt;
&lt;p&gt;Although the problem with application dependencies is solved in some programming languages, it’s not solved for all the applications that we use. We need a technology which would provide isolated runtime environments for any application.&lt;/p&gt;
&lt;p&gt;Besides, what if we decide to increase our computing resource utilization by running multiple instances of the same application on one server?&lt;/p&gt;
&lt;p&gt;We find many problems here as well. For example, running multiple instances of nginx would require changing its init script, as well as its configs because we can’t bind more than one application to the same port. As you can see, it takes quite a bit of work.&lt;/p&gt;
&lt;p&gt;I think you get the idea that we have quite a few things to think about at the OS level of our server model. What do we do? 🤔&lt;/p&gt;
&lt;p&gt;A bunch of people came up with this idea: if we can’t run multiple applications in one OS, let’s just figure out a way to run more OSes on our physical servers. Thus, hardware virtualization came into play.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/docker/hard-virt.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;The idea was that a special piece of software called a hypervisor abstracts the underlying hardware (virtualizes it) to present the same hardware to several operating systems. The operating systems which run on top of the hypervisor talk to the virtual hardware it presents using generic drivers. Thus, we could have multiple OSes of different sorts running on the same server at the same time, each using a share of the computing resources that are there.&lt;/p&gt;
&lt;p&gt;Thanks to hardware virtualization, the OS became portable. We’d seen before how the operating system was bound to the vendor hardware. But now operating systems can’t see the server’s underlying hardware; instead, they see the generic hardware provided by the hypervisor. This makes the process of distributing and running various OSes on any hardware very easy. All we need is a proper hypervisor running on that hardware or host OS.&lt;/p&gt;
&lt;p&gt;Hardware virtualization allowed us to run multiple OSes in isolation on the same physical server and made moving an OS from one machine to another much easier.&lt;/p&gt;
&lt;p&gt;Did it solve our problem with computing resource utilization? It sort of did. We can now run more instances of our applications on a physical server by running more instances of operating systems.&lt;/p&gt;
&lt;p&gt;But it doesn’t really solve our initial problem with running multiple applications on the same OS. It seems more like a workaround.&lt;/p&gt;
&lt;p&gt;What we really need is a technology that would allow us to run our applications in isolation on one OS.&lt;/p&gt;
&lt;p&gt;And this is where containers come in.&lt;/p&gt;
&lt;p&gt;Containers are an abstraction at the OS layer. With the help of containers, we can bundle the application code and all its dependencies into one standardized package and then run it in isolation from other system processes.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/docker/container.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Isolation means everything here. For instance, filesystem isolation means that each container gets its own filesystem which solves the problem with version conflicts. Network isolation means each instance gets its own IP address and thus two containerized applications are free to bind to the same port, without us having to deal with any init scripts.&lt;/p&gt;
&lt;p&gt;This really solves our problem with resource utilization. We can now run multiple instances of applications (even of the same type) in our OS and use our computing resources to the maximum.&lt;/p&gt;
&lt;p&gt;Since container technology is based on features built into the Linux kernel, like cgroups and namespaces, we can only run containers on Linux distributions. That said, with the use of virtual machines, we can run containers on pretty much any OS (check out how Docker works on Windows or OS X).&lt;/p&gt;
&lt;p&gt;Docker itself is a software container platform which allows us to create and manage containers.&lt;/p&gt;
&lt;p&gt;Docker provides a standard format for creating container images. This makes it extremely easy to run and distribute our applications across various OSes. For instance, we can now build a container image with our application on CentOS and then run that same image on Ubuntu without having to change anything, because differences in OS distributions are abstracted away. The only thing we need is to have Docker installed.&lt;/p&gt;
&lt;p&gt;But I want to note that having containers doesn’t mean we don’t need VMs. They are still widely used and help us decrease hardware costs. They are often used when we need to provide computing resources to different people. Virtual machines allow us to allocate the exact amount of computing resources to meet a client’s requirements, and if there are resources left, we can allocate them to another client. This is how public clouds work.&lt;/p&gt;
&lt;p&gt;In fact, VMs and containers work nicely together (＾＾)ｂ&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/docker/container-vm.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;As for using VMs to provide isolation for our applications, VMs seem a bit too heavy. They are big in size (we’re usually talking about gigabytes here) because they run the full OS which includes things like the kernel and hardware drivers.&lt;/p&gt;
&lt;p&gt;On the other hand, containers don’t need all that software an OS usually has. All they need is the application you want to run plus its dependencies. As a result, containers are much smaller in size (MBs against GBs), which means they’re easier to build and distribute. They also start and stop quicker because there’s no OS boot process required.&lt;/p&gt;
&lt;p&gt;So containers are obviously a better choice for running applications in isolation, although VMs are still used for running and distributing applications when a higher level of security and isolation is required.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This is all great, but I still don’t get why we have the ubuntu image in the Dockerfile. If a container provides isolation to the application and its dependencies, and uses the host operating system’s kernel, then why do we need to specify an OS image in our Dockerfile&lt;/strong&gt; (◔_◔)???&lt;/p&gt;
&lt;p&gt;Let’s make things even more confusing before we answer that question.&lt;/p&gt;
&lt;p&gt;If I build the image from the Dockerfile above and look at its size, here’s what I will see.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/docker/im-size.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Ubuntu image is only 188MB. 🤔&lt;/p&gt;
&lt;p&gt;Have you seen an ubuntu OS image of that size? If you search for ubuntu 14.04, you’ll see that its size is about 1GB.&lt;/p&gt;
&lt;p&gt;So why is the ubuntu container image 5 times smaller than a normal ubuntu image?&lt;/p&gt;
&lt;p&gt;As we discussed earlier, containers don’t need a full OS as they share the kernel and execute instructions on the host directly. So that ubuntu container image is a stripped-down image of the real operating system which doesn’t include the kernel, but does have some libs and utilities specific to the Ubuntu distro.&lt;/p&gt;
&lt;p&gt;This means that we actually don’t need the ubuntu container image to run my Go binary. If I build another container image (v2.0) for my test application from &lt;code class="highlighter-rouge"&gt;scratch&lt;/code&gt;, it should still work.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;FROM scratch&lt;/span&gt;
&lt;span class="s"&gt;COPY ./hello-world .&lt;/span&gt;
&lt;span class="s"&gt;EXPOSE 8080&lt;/span&gt;
&lt;span class="s"&gt;CMD [ "./hello-world" ]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;code class="highlighter-rouge"&gt;scratch&lt;/code&gt; is a reserved docker image name which is just a no-op: the build starts from a completely empty image.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/docker/scratch.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Noticed how it skipped the first step and went straight to the next?&lt;/p&gt;
&lt;p&gt;Now if I look at the image size of my new image, I’ll see that it became significantly smaller - only 5 MB against 194 MB! And it still works as before. &lt;script type="text/javascript" src="https://asciinema.org/a/V5jj9QPoERF8YAEKcVCY5JRWH.js" id="asciicast-V5jj9QPoERF8YAEKcVCY5JRWH" async=""&gt;&lt;/script&gt;&lt;/p&gt;
&lt;p&gt;But why do people still use OS container images anyway? One of the most common reasons is the package manager. When you write a Dockerfile to build your image, it’s much easier to install the required dependencies inside the container using a package manager in a RUN command than to copy all the packages from your local machine to the container.&lt;/p&gt;
&lt;p&gt;Another reason to use an OS image is that it facilitates troubleshooting. For instance, an OS image may have a shell installed, so I can run a terminal inside the container and issue commands from within.&lt;/p&gt;
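&lt;p&gt;For instance, with the ubuntu-based image from above you could open a shell inside a running container like this (the container ID placeholder is whatever &lt;code class="highlighter-rouge"&gt;docker ps&lt;/code&gt; shows):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ docker exec -it &amp;lt;container-id&amp;gt; /bin/bash   # works for the ubuntu-based image; fails for scratch, which has no shell&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;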
&lt;p&gt;Let’s see if I can check internet connectivity from inside the containers that I’ve built. &lt;script type="text/javascript" src="https://asciinema.org/a/5Li4vveWKiL3IjB84415OGh0x.js" id="asciicast-5Li4vveWKiL3IjB84415OGh0x" async=""&gt;&lt;/script&gt;&lt;/p&gt;
&lt;p&gt;As you saw, I wasn’t able to run a shell inside version 2 of my container, which was built from scratch. But I could run a shell and ping inside version 1 of my container, because the parent image (ubuntu) has all the required binaries in it.&lt;/p&gt;
&lt;p&gt;So using OS images when building your application container images is a common practice. But you should remember to choose the OS container image wisely to avoid bloating your application container image.&lt;/p&gt;
&lt;p&gt;For further reading, I recommend checking out the &lt;a href="https://docs.docker.com/samples/alpine/"&gt;Alpine&lt;/a&gt; Docker image which is one of the most popular these days. It’s very lightweight (4-5 MB), it has a package manager and many useful utilities preinstalled.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Iterm2 + Tmux = Awesome</title>
    <link href="https://artemstar.com/2017/05/28/tmux-iterm2/"/>
    <id>https://artemstar.com/2017/05/28/tmux-iterm2/</id>
    <updated>2017-05-28T00:00:00Z</updated>
    <published>2017-05-28T00:00:00Z</published>
    <content type="html">
&lt;p&gt;If you regularly work with the terminal, especially if you work with remote servers via SSH, you must know about Tmux. It is an awesome tool that makes your work so much easier!&lt;/p&gt;
&lt;p&gt;What is Tmux? Tmux stands for terminal multiplexer. It basically allows you to open multiple terminal sessions inside a single terminal window or even a remote terminal session (like when you SSH into a server). This may not seem very cool to you now, but let’s look at some examples of how it works and I’m sure you’ll love it :) &lt;!--break--&gt;&lt;/p&gt;
&lt;p&gt;First, we need to install and configure Tmux. As I also want to show you how nicely Tmux integrates with &lt;a href="https://www.iterm2.com/"&gt;Iterm2&lt;/a&gt;, I assume you have it ╭( ･ㅂ･)و&lt;/p&gt;
&lt;p&gt;To install Tmux just run:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ brew install tmux&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Then type &lt;code class="highlighter-rouge"&gt;tmux&lt;/code&gt; in your terminal, hit enter and start using it ;)&lt;/p&gt;
&lt;p&gt;Well, you’ll probably get lost if you use it for the first time, as it takes a bit of learning to get a grasp of it. So follow along and I’ll show you how things work.&lt;/p&gt;
&lt;p&gt;As we already mentioned, Tmux allows you to open multiple terminals inside a single terminal window. It also supports splitting panes, so you can view several of those terminals side by side in one window.&lt;/p&gt;
&lt;p&gt;Tmux follows a client-server model, which brings the concept of sessions into play. You create new terminal windows within a session. So the first thing we need to know is how to create a session:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ tmux&lt;/span&gt; &lt;span class="c1"&gt;# creates a new session (session number is assigned as a session's name)&lt;/span&gt;
&lt;span class="s"&gt;$ tmux new -s &amp;lt;session-name&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;# creates a new session and assigns it a name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;To list the sessions you can run:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ tmux ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Creating a session automatically creates a new terminal window within that session. You’ll see how your current terminal window is switched to a new terminal.&lt;br&gt;&lt;br&gt; &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/VYtiVx2i_ZQ" frameborder="0" allowfullscreen=""&gt;&lt;/iframe&gt;&lt;/p&gt;
&lt;p&gt;You may wonder how I &lt;em&gt;detached&lt;/em&gt; from the new session and got back to my original terminal. Here is the command:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;Ctrl + b, d&lt;/span&gt; &lt;span class="c1"&gt;# don't type the comma, it is here to separate the prefix and the command&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Here &lt;code class="highlighter-rouge"&gt;Ctrl + b&lt;/code&gt; is the so-called &lt;em&gt;prefix&lt;/em&gt; which basically tells the terminal that the next thing you type will be a Tmux command. You’ll have to deal with this prefix a lot if you don’t use Iterm2, as every Tmux command should be prefixed with this combination, although you can change it.&lt;/p&gt;
&lt;p&gt;To attach to an existing session you can use the following command:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ tmux a -t &amp;lt;session-name&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;# "a" is short for "attach" here&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Now as you got the basics, let’s look at a more advanced example with splitting panes.&lt;br&gt;&lt;br&gt; &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/HUUMwx3ZqcA" frameborder="0" allowfullscreen=""&gt;&lt;/iframe&gt;&lt;/p&gt;
&lt;p&gt;“That’s all very cool, but I can do the same things with Iterm2 profiles and splitting panes” - you’ll probably say. And you’re right. I personally use Tmux locally very rarely. Where Tmux becomes incredibly useful is when you work with remote servers via SSH.&lt;/p&gt;
&lt;p&gt;Do you ever need to work with some remote server regularly via SSH? Maybe you’re a web developer working on a new application. Or maybe you’re a system administrator testing a new service setup and configuration. In these cases, you’ll often open up many terminals: one for a text editor with a service configuration or code, one for tailing logs, one for launching the application or service, etc. Your work on a server can take hours, and what really sucks is that when you take a break and go for lunch, you need to keep your ssh connection open, because otherwise you’ll have to open all those windows again to be able to work.&lt;/p&gt;
&lt;p&gt;Here is where Tmux comes into play. The greatest thing about a Tmux session is that it persists beyond SSH logout. So you can just shut down your laptop, go play soccer, then come back, ssh into your remote server and open the terminal windows as they were when you logged out. Tmux basically makes your work independent of the SSH connection.&lt;/p&gt;
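&lt;p&gt;The whole workflow, sketched as a terminal session (the host and session names here are made up):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ ssh user@myserver       # made-up host name
$ tmux new -s work        # start a named session on the server
                          # ... open panes, editors, logs; work as usual ...
Ctrl + b, d               # detach; everything keeps running
$ exit                    # the SSH connection closes, the session survives
$ ssh user@myserver       # hours later
$ tmux a -t work          # attach: everything is as you left it&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;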
&lt;p&gt;Let’s see an example that demonstrates that Tmux sessions indeed persist beyond SSH logout.&lt;br&gt;&lt;br&gt; &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/_qj9ZGL-MD8" frameborder="0" allowfullscreen=""&gt;&lt;/iframe&gt;&lt;/p&gt;
&lt;p&gt;Here I used this cool Iterm2 and Tmux integration I was talking about.&lt;/p&gt;
&lt;p&gt;Instead of switching my terminal to a new one created by the Tmux session, Iterm2 allows me to open it just like another tab in my terminal window. Moreover, I don’t have to memorize all of those Tmux commands like &lt;code class="highlighter-rouge"&gt;Ctrl + b, %&lt;/code&gt; to split a pane or &lt;code class="highlighter-rouge"&gt;Ctrl + b, o&lt;/code&gt; to switch panes; instead I can use the Iterm2 shortcuts I use every day \ (•◡•) /&lt;/p&gt;
&lt;p&gt;To use Tmux with Iterm2, you only need to provide an extra option (&lt;code class="highlighter-rouge"&gt;-CC&lt;/code&gt;) for Tmux commands:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ tmux -CC&lt;/span&gt; &lt;span class="c1"&gt;# create a new session&lt;/span&gt;
&lt;span class="s"&gt;$ tmux -CC a -t &amp;lt;session-name&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;# attach to a session&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;And yeah, you can choose whether you want to open a Tmux session in a new tab or a new window in the Iterm2 settings.&lt;/p&gt;
&lt;p&gt;Tmux is very lightweight and even comes preinstalled with some Linux distributions like Ubuntu. Tmux eliminates the risk of losing your work when an SSH connection drops. Sometimes this is very important, like when you’re doing a manual backup of your server. In general, if losing an SSH connection means a lot of lost work to you - opening a window for a text editor, for logs, for running commands - then Tmux can make your life so much easier. Let’s look at another example close to a real world use case: playing around with nginx configuration: &lt;br&gt;&lt;br&gt; &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/-pOpVSVTMPk" frameborder="0" allowfullscreen=""&gt;&lt;/iframe&gt;&lt;/p&gt;
&lt;p&gt;Hopefully, if this post didn’t make you fall in love with Tmux, at least it made you curious ⚆ _ ⚆&lt;/p&gt;
&lt;p&gt;In the end, I want to also mention another cool feature which Tmux provides. It allows multiple users to &lt;a href="https://www.youtube.com/watch?v=norO25P7xHg"&gt;share a terminal session&lt;/a&gt; which basically means two people can work in the same terminal at the same time. For example, this could be helpful for pair programming.&lt;/p&gt;
&lt;p&gt;Also, if you really liked the idea behind Tmux, you may want to take a look at &lt;a href="https://github.com/Tmuxinator/Tmuxinator"&gt;Tmuxinator&lt;/a&gt; which allows you to customize your work with Tmux even further.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>AWS Roles. When and how I can use them?</title>
    <link href="https://artemstar.com/2017/05/17/aws-roles/"/>
    <id>https://artemstar.com/2017/05/17/aws-roles/</id>
    <updated>2017-05-17T00:00:00Z</updated>
    <published>2017-05-17T00:00:00Z</published>
    <content type="html">
&lt;p&gt;Unless you work via &lt;a href="http://searchaws.techtarget.com/definition/AWS-Management-Console"&gt;AWS Management Console&lt;/a&gt;, in order to access AWS resources you talk to AWS API. All API requests that you make need to be signed by &lt;a href="http://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys"&gt;secret access keys&lt;/a&gt; (access key ID and secret access key), so that AWS could identify who is making the request and prevent strangers from accessing your resources.&lt;/p&gt;
&lt;p&gt;If you ever worked with tools like &lt;a href="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html"&gt;AWS CLI&lt;/a&gt; and &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; which allow you to manage your AWS infrastructure, then you know that they require AWS credentials in order to work, because they work by making requests to the AWS API. When you work with these tools, you usually put your credentials in a special &lt;code class="highlighter-rouge"&gt;.aws&lt;/code&gt; folder in your home directory, and they simply take them from there every time you use them.&lt;/p&gt;
&lt;p&gt;But what do you do when you need to provide access to AWS resources to some scripts or an application running on your EC2 instance? For example, your application running on an EC2 instance might need to save or fetch some files from an S3 bucket. Do you just put user credentials on that instance in this case? Although this will work, there’s a better way to do this… &lt;!--break--&gt;&lt;/p&gt;
&lt;p&gt;For cases like this, when one AWS service needs to access another, AWS offers a special IAM entity that is called a &lt;em&gt;role&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Roles are similar in concept to users. A role is an identity to which you attach policies with permissions to access AWS resources. The difference is that, unlike a user, a role is not associated with a single person; instead, it is meant to be assumed by any person or service that needs it.&lt;/p&gt;
&lt;p&gt;What roles bring to the table:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic credentials.&lt;/strong&gt; Unlike users, roles don’t have any credentials associated with them. Instead, credentials are created dynamically and provided to a user or a service that assumes the role. This saves us some trouble of having to distribute credentials.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporary credentials.&lt;/strong&gt; Each set of credentials that you use requires rotation. The great thing about roles is that the credentials that come with them are rotated regularly for you. If you chose to put user credentials on your machines instead, rotation would mean creating new access keys and running configuration management tools to update the credentials on all of your servers. That would mean extra work.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To sum it up, EC2 instances that have an IAM role attached automatically have AWS security credentials available which are also regularly rotated for you. Thus, your application or any script that you run on that instance can use those credentials to access AWS resources via API. And you, the person who manages the AWS infrastructure, are saved quite a bit of time and trouble related to security credentials rotation and distribution. Sounds cool? Let’s see how it works.&lt;/p&gt;
&lt;h3 id="example"&gt;Example&lt;/h3&gt;
&lt;p&gt;We will first create an IAM role and attach a policy that gives access to an S3 bucket. Then we launch an EC2 instance with this role attached, ssh into the instance, and do some tests like uploading and downloading files from S3.&lt;/p&gt;
&lt;p&gt;When you create a role via the AWS Management Console, some steps are done for you automatically. That’s why, to show you how this process really goes, I will use the AWS CLI.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; We’ll start by creating a file with our &lt;em&gt;trust policy&lt;/em&gt;. This is one of the two types of policies we’ll need to create. A trust policy simply describes who can assume this role.&lt;/p&gt;
&lt;div class="language-json highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ec2.amazonaws.com"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts:AssumeRole"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;In the &lt;a href="http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal"&gt;Principal&lt;/a&gt; element we specify a user, AWS account, AWS service, or other principal entity that is allowed or denied access to a resource. In this trust policy, we say that we want to allow the EC2 service (our instances) to assume the role, which in turn means retrieving temporary credentials from &lt;a href="http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-api.html"&gt;AWS STS&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; After we’ve created a file with our trust policy, we’ll create the role itself.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ aws iam create-role --role-name logsbucket-role --assume-role-policy-document file://trust-policy.json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I provide the path to a file with the trust policy after the &lt;code class="highlighter-rouge"&gt;file://&lt;/code&gt; prefix.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Now that we’ve created a role and defined who can assume it (the principal), we need to add a set of permissions to our role. We’ll create a permissions policy which defines what actions and resources the principal is allowed to use.&lt;/p&gt;
&lt;p&gt;First, I’ll create a file (&lt;code class="highlighter-rouge"&gt;logs-bucket-permissions.json&lt;/code&gt;) with my access policy:&lt;/p&gt;
&lt;div class="language-json highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"s3:PutObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::artem-server-logs/*"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Note that each resource in AWS is uniquely identified by an Amazon Resource Name (&lt;a href="http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html"&gt;ARN&lt;/a&gt;), so we need to specify the ARNs of the resources we want to make accessible through this role.&lt;/p&gt;
&lt;p&gt;In this policy, we allow getting objects from and putting objects to my test bucket &lt;code class="highlighter-rouge"&gt;artem-server-logs&lt;/code&gt; which I created beforehand.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4.&lt;/strong&gt; Then we have two options: either to embed this policy in our role making it inline policy (&lt;a href="http://docs.aws.amazon.com/cli/latest/reference/iam/put-role-policy.html"&gt;put-role-policy&lt;/a&gt; command) or create a standalone (managed) policy and then attach it to the role (&lt;a href="http://docs.aws.amazon.com/cli/latest/reference/iam/attach-role-policy.html"&gt;attach-role-policy&lt;/a&gt; command). The difference between managed and inline policies is explained in full detail &lt;a href="http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this example, we’ll use an inline policy. This way, it will be deleted along with the role when I delete the role.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ aws iam put-role-policy --role-name logsbucket-role --policy-name LogsBucketPermissions  --policy-document file://logs-bucket-permissions.json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;5.&lt;/strong&gt; We’ve created our role and given it permissions, but there is still one thing we need to do before we can use it with EC2 instances. The problem is that we cannot directly attach the role to an EC2 instance, because the application that is running on it is abstracted from AWS by the virtualized operating system (read more on this &lt;a href="http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html"&gt;here&lt;/a&gt;). To assign a role to an instance and make it available for the application or script that is running on it, we need to put the role in a container that is called &lt;em&gt;instance profile&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I took a picture from this &lt;a href="http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html"&gt;post&lt;/a&gt; where they did a good job explaining how roles work with EC2 instances.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/aws/role-work.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;This picture shows you that &lt;em&gt;instance profile&lt;/em&gt; is just a container that you attach to an EC2 instance and that holds your role. An application running on the instance has access to the role through this container and uses the role to retrieve temporary credentials for signing API requests.&lt;/p&gt;
&lt;p&gt;When you create a role via Amazon Management Console, the instance profile is created for you automatically. It has the same name as the role. But it doesn’t work this way with AWS CLI, so we need to create a container ourselves:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ aws iam create-instance-profile --instance-profile-name logsbucket-profile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and then put our role in this container:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ aws iam add-role-to-instance-profile --instance-profile-name logsbucket-profile --role-name logsbucket-role&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;You can see what role the instance-profile contains by running:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ aws iam get-instance-profile --instance-profile-name logsbucket-profile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Things to note: an instance profile can contain only one role, but a single role can be included in multiple instance profiles.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;6.&lt;/strong&gt; We’re done with the role now. The only thing left is to launch an EC2 instance with our role and test it :)&lt;/p&gt;
&lt;p&gt;I will launch an Ubuntu 16.04 instance via the Management Console since the AWS CLI command is too long to paste here. In the third step of the launch wizard, you can specify which role to attach to the instance.&lt;/p&gt;
&lt;p&gt;Note that in the launch wizard, instead of a role you actually choose an &lt;em&gt;instance profile&lt;/em&gt; that holds your role.&lt;/p&gt;
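&lt;p&gt;For reference, the CLI equivalent would look roughly like this - the AMI ID and key name below are placeholders, not real values:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ aws ec2 run-instances \
    --image-id ami-xxxxxxxx \                       # placeholder: an Ubuntu 16.04 AMI ID
    --instance-type t2.micro \
    --key-name my-key \                             # placeholder: your SSH key pair
    --iam-instance-profile Name=logsbucket-profile&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;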
&lt;p&gt;&lt;strong&gt;7.&lt;/strong&gt; After the instance has been launched, I ssh into it and check if I can retrieve the temporary security credentials that come with the role.&lt;/p&gt;
&lt;p&gt;The security credentials are retrieved from the &lt;a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html"&gt;instance metadata&lt;/a&gt;. The instance metadata is basically all the data about the instance that is accessible from within the instance itself. So if an application needs to know some information about the instance it’s running on, including the temporary security credentials available to the instance, it needs to send an HTTP GET request to the following URL:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ curl http://169.254.169.254/latest/meta-data/&amp;lt;metadata-category&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Thus, if I want to test whether the security credentials are available to me from within the instance, I can try to get them from the instance metadata:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/logsbucket-role&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Most of the time you will use an AWS SDK for making AWS API requests. In that case, you don’t have to query the instance metadata to retrieve AWS credentials; the AWS SDK does this for you.&lt;/p&gt;
&lt;p&gt;AWS CLI is a command line tool built on top of the AWS SDK for Python. It’s often used to manage AWS resources via the API. We’ll use it for our tests.&lt;/p&gt;
&lt;p&gt;I’ve just tested if I could retrieve security credentials from metadata, now I want to check if I can actually get files from and save files to my test S3 bucket.&lt;/p&gt;
&lt;p&gt;I will install AWS CLI on my ubuntu instance:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ sudo apt-get -y update&lt;/span&gt;
&lt;span class="s"&gt;$ sudo apt-get -y install awscli&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Then I do my testing. I create a file, upload it to my test bucket, delete my local copy and download it again from the bucket.&lt;/p&gt;
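&lt;p&gt;Here is a sketch of that test; the region is an assumption, use your bucket’s actual region:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ echo "role test" &amp;gt; test.txt
$ AWS_DEFAULT_REGION=us-east-1 aws s3 cp test.txt s3://artem-server-logs/   # upload (s3:PutObject)
$ rm test.txt
$ AWS_DEFAULT_REGION=us-east-1 aws s3 cp s3://artem-server-logs/test.txt . # download (s3:GetObject)&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;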
&lt;p&gt;It works as expected, but you might notice that I also had to specify the region of my bucket through the environment variable.&lt;/p&gt;
&lt;h3 id="conclusions"&gt;Conclusions&lt;/h3&gt;
&lt;p&gt;In this post we’ve seen how AWS Roles work and what benefits we get from using them.&lt;/p&gt;
&lt;p&gt;AWS Roles are used all the time in many different cases. We tried uploading a test text file to an S3 bucket and downloading it back, but in the same way we could archive and upload system and application logs to remote storage. The application that you run on your EC2 instance might need to access DynamoDB (a NoSQL database), which is another AWS service, and we see the same problem with credentials management, which we now know is easily solved by using roles. Thus, it’s really important to understand how AWS roles work, so that we’re not afraid of using them :)&lt;/p&gt;
&lt;p&gt;Here is another interesting &lt;a href="https://www.youtube.com/watch?v=C4AyfV3Z3xs"&gt;link&lt;/a&gt; to the video that shows you how AWS roles could be used with SDK-based applications.&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Master Git (part V). Change commits in your history. Interactive rebase.</title>
    <link href="https://artemstar.com/2017/05/07/rebase-interactive/"/>
    <id>https://artemstar.com/2017/05/07/rebase-interactive/</id>
    <updated>2017-05-07T00:00:00Z</updated>
    <published>2017-05-07T00:00:00Z</published>
    <content type="html">
&lt;p&gt;When you work with version control, you sometimes find yourself in situations when you would like to edit or delete the saved changes in your project’s history. Well, in this case, you should know that rewriting your project’s history can be &lt;em&gt;dangerous&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Unless you work on a project alone, you will have to share your changes to the project (commits) with other people. In most cases, you share commits via a central server like Bitbucket or GitHub rather than directly with other developers. I’m sure you already know that when we make a new commit, it’s based on some other commit(s) which is considered to be the new commit’s parent(s). So once you’ve shared your commits, they will constitute the basis for the work of other people. Rewriting or deleting commits that you already shared with others would mean a whole lot of trouble for those who based their work on those commits. So unless you want your colleagues to hate you, follow this simple rule:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;Never change commits which you already shared with others.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;If you want to undo the changes in your project’s history, just create a new commit that corrects or completely removes the changes introduced by previous commits. There is the &lt;a href="/2017/03/27/git/"&gt;revert&lt;/a&gt; command which allows you to do just that.&lt;/p&gt;
&lt;p&gt;If you understood that rewriting your public history is bad, then you’re safe to keep reading :) Today I will show you how changing commits could be useful. &lt;!--break--&gt;&lt;/p&gt;
&lt;p&gt;I already talked about commands like &lt;a href="/2017/03/27/git"&gt;git commit --amend&lt;/a&gt; which helps you fix up the last commit and &lt;a href="/2017/03/27/git/"&gt;git reset&lt;/a&gt; which allows you to go back to a previous commit, removing all the commits that follow it.&lt;/p&gt;
&lt;p&gt;Again, those are very useful, but should be used on local commits only.&lt;/p&gt;
&lt;p&gt;Today we will talk about a more powerful command, &lt;strong&gt;&lt;code class="highlighter-rouge"&gt;git rebase -i&lt;/code&gt;&lt;/strong&gt;, which gives you full control over how you want to reshape your project’s history.&lt;/p&gt;
&lt;h3 id="rebase"&gt;Rebase&lt;/h3&gt;
&lt;p&gt;Remember what rebasing is? It simply allows you to change the base of your current branch. Basically this means taking the changes introduced by commits and creating new commits with the same changes but with a different base.&lt;/p&gt;
&lt;p&gt;The &lt;code class="highlighter-rouge"&gt;-i&lt;/code&gt; option starts interactive mode, which gives you full control over how this process goes.&lt;/p&gt;
&lt;h3 id="case-1-two-different-branches"&gt;Case 1: Two different branches&lt;/h3&gt;
&lt;p&gt;So let’s imagine a typical situation: a feature branch with two commits that has diverged from master,&lt;/p&gt;
&lt;p&gt;and from the feature branch let’s run interactive rebase.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git checkout feature&lt;/span&gt;
&lt;span class="s"&gt;$ git rebase -i master&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This will open up a text editor and offer you different options on how to create the new commits. The comments describe all the things we can do.&lt;/p&gt;
&lt;p&gt;As we can see, we can edit existing commits (&lt;code class="highlighter-rouge"&gt;edit&lt;/code&gt;) including their messages (&lt;code class="highlighter-rouge"&gt;reword&lt;/code&gt;), merge several commits into one (&lt;code class="highlighter-rouge"&gt;squash&lt;/code&gt;, &lt;code class="highlighter-rouge"&gt;fixup&lt;/code&gt;), run commands as we create new commits (&lt;code class="highlighter-rouge"&gt;exec&lt;/code&gt;), and remove commits (&lt;code class="highlighter-rouge"&gt;drop&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Let’s imagine that my two commits added new functions which are closely related and work together. In this case, I may want to squash the second commit into the first and leave a single commit in the project’s history instead of two. I will use the &lt;code class="highlighter-rouge"&gt;squash&lt;/code&gt; option which, unlike &lt;code class="highlighter-rouge"&gt;fixup&lt;/code&gt;, allows me to change the message for the resulting commit.&lt;/p&gt;
&lt;p&gt;So I change &lt;code class="highlighter-rouge"&gt;pick&lt;/code&gt; to &lt;code class="highlighter-rouge"&gt;s&lt;/code&gt;(&lt;code class="highlighter-rouge"&gt;squash&lt;/code&gt;) for the second commit.&lt;/p&gt;
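&lt;p&gt;The todo list would then look something like this (the hashes and messages here are hypothetical):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;# the second commit gets squashed into the first
pick a1b2c3d Add request timeout to the client
s    e4f5a6b Add retry logic that uses the timeout&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;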
&lt;p&gt;After saving and exiting, a text editor will open up again, asking me to edit the commit message. It shows the messages of all the squashed commits. If we leave it as is, all of them will be used for the new commit’s message.&lt;/p&gt;
&lt;p&gt;So I’ll just edit the first one and delete the rest which in my case is just the second commit message:&lt;/p&gt;
&lt;p&gt;In the end, we get one new commit which applies to a new base (&lt;code class="highlighter-rouge"&gt;master&lt;/code&gt;) the same changes as the two commits we just squashed.&lt;/p&gt;
&lt;h3 id="case-2-the-same-branch"&gt;Case 2: the same branch&lt;/h3&gt;
&lt;p&gt;Interactive rebase can be useful not only when we rebase one branch onto another. It is often used to &lt;em&gt;rewrite the history of a branch&lt;/em&gt; - the thing we talked about in the beginning when we mentioned commands like &lt;a href="/2017/03/27/git/"&gt;amend&lt;/a&gt; and &lt;a href="/2017/03/27/git/"&gt;reset&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Let’s imagine I have a local branch with a series of commits which I haven’t shared with anyone yet. And before I do share it, I would like to edit some of them: probably squash a few, as I made a bunch of small commits which look better together, maybe correct some typos in the commit messages, etc.&lt;/p&gt;
&lt;p&gt;First, I choose the range of commits which I want to change. I will change commits that follow the commit &lt;code class="highlighter-rouge"&gt;bdc72fe&lt;/code&gt; - this will be the base for all new commits which will be created - remember &lt;em&gt;rebasing creates new commits&lt;/em&gt; at the specified base.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git rebase -i bdc72fe&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This will open up a text editor as we’ve seen before and list the commits coming after the commit we specified, &lt;code class="highlighter-rouge"&gt;bdc72fe&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;My commits regarding the timeout served the same purpose of introducing a new parameter to the function and removing a temporary workaround. So I’ll squash (&lt;code class="highlighter-rouge"&gt;s&lt;/code&gt;) all those commits into one. I also noticed that I made a typo in the word &lt;em&gt;controller&lt;/em&gt;, so I’ll reword that message (&lt;code class="highlighter-rouge"&gt;r&lt;/code&gt;). And the last commit introduced a new function which I decided was unnecessary, so I’ll drop that commit (&lt;code class="highlighter-rouge"&gt;d&lt;/code&gt;).&lt;/p&gt;
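&lt;p&gt;Sketched with hypothetical hashes and messages, my edited todo list would look something like:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;# squash the workaround into the timeout commit, reword the typo, drop the helper
pick 1f2e3d4 Add timeout parameter to the function
s    5c6b7a8 Remove temporary timeout workaround
r    9a8b7c6 Add contoller module
d    0d1c2b3 Add unnecessary helper function&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;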
&lt;p&gt;By changing commit messages as described earlier, I achieve the state of the project’s history I want:&lt;/p&gt;
&lt;h3 id="how-to-undo-rebase-o_o-"&gt;How to undo rebase O_o ?&lt;/h3&gt;
&lt;p&gt;Git allows you to restore history even when you rewrite it. I have already touched on using &lt;a href="/2017/03/28/git/"&gt;reflog&lt;/a&gt; to restore changed history to the previous state in one of my posts.&lt;/p&gt;
&lt;p&gt;So I’ll simply show you how I restore my project’s history to the state before we started rebasing.&lt;/p&gt;
&lt;p&gt;We use the &lt;code class="highlighter-rouge"&gt;git reflog&lt;/code&gt; command to find the commit HEAD pointed to before we started the rebase. It’s usually the last commit that you made.&lt;/p&gt;
&lt;p&gt;Then I’ll use a &lt;a href="/2017/03/27/git/"&gt;hard reset&lt;/a&gt; to restore my branch to the state it was in when the last commit was added, before we started rebasing.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git reset --hard HEAD@{17}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>How to work with multiple AWS accounts.</title>
    <link href="https://artemstar.com/2017/04/22/aws-profiles/"/>
    <id>https://artemstar.com/2017/04/22/aws-profiles/</id>
    <updated>2017-04-22T00:00:00Z</updated>
    <published>2017-04-22T00:00:00Z</published>
    <content type="html">
&lt;p&gt;Ever had to work with multiple AWS accounts? If so, then you probably have a working solution for making account switching easier, in which case don’t hesitate to share it with me in the comments below. But if you still find it troublesome to manage multiple sets of AWS credentials, then you should find this post interesting.&lt;/p&gt;
&lt;p&gt;If you’re used to working with just one AWS account, you most probably used &lt;code class="highlighter-rouge"&gt;aws configure&lt;/code&gt; command to quickly set up your &lt;em&gt;default&lt;/em&gt; credentials like this.&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/terraform/aws-conf.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;Recently, I found myself in a situation where I had to switch between the AWS accounts of two different companies. As it’s clearly not a rare case, the first thing I did was look for the solution that Amazon itself suggests for managing multiple profiles.&lt;!--break--&gt;&lt;/p&gt;
&lt;p&gt;And I found that Amazon allows you to create &lt;a href="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles"&gt;named profiles&lt;/a&gt; for each set of your credentials.&lt;/p&gt;
&lt;p&gt;So I created a profile for each of the two AWS accounts that I had to work with. I chose to name those profiles after the organizations to which the user accounts belonged. &lt;script type="text/javascript" src="https://asciinema.org/a/a86oc30plnxdcfy5weinvrsex.js" id="asciicast-a86oc30plnxdcfy5weinvrsex" async=""&gt;&lt;/script&gt;&lt;/p&gt;
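&lt;p&gt;Creating a named profile looks just like the usual &lt;code class="highlighter-rouge"&gt;aws configure&lt;/code&gt; run, only with an extra option (shown here for the &lt;code class="highlighter-rouge"&gt;ex&lt;/code&gt; profile used below):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ aws configure --profile ex
AWS Access Key ID [None]: ...
AWS Secret Access Key [None]: ...
Default region name [None]: ...
Default output format [None]: ...&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;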
&lt;p&gt;I forgot to mention that I deleted the default profile which I had previously configured. I did this so that my actions wouldn’t affect any AWS resources without me being fully aware of where I’m making changes.&lt;/p&gt;
&lt;p&gt;Now, I could use AWS CLI with 2 different user accounts as long as I provided &lt;code class="highlighter-rouge"&gt;--profile &amp;lt;profile_name&amp;gt;&lt;/code&gt; option to each command:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;aws ec2 describe-instances --profile ex&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;By the way, this describe-instances command produces a huge output. So I usually use the following alias to get a short description of the launched instances:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ~/.zshrc&lt;/span&gt;
&lt;span class="s"&gt;alias idesc="aws ec2 describe-instances --query 'Reservations[*].Instances[*].[Placement.AvailabilityZone, State.Name, InstanceId,InstanceType,Tags]' --output text"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;img src="/public/img/terraform/idesc.png" alt="200x200"&gt;&lt;/p&gt;
&lt;p&gt;But always having to provide the profile option with every command quickly gets tiring, right? The good thing is that Amazon allows us to use an environment variable to specify the profile we want to use:&lt;/p&gt;
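&lt;p&gt;For example, reusing the &lt;code class="highlighter-rouge"&gt;ex&lt;/code&gt; profile from above:&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ export AWS_PROFILE=ex
$ aws ec2 describe-instances    # no --profile option needed anymore&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;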
&lt;p&gt;Setting the &lt;code class="highlighter-rouge"&gt;AWS_PROFILE&lt;/code&gt; environment variable affects credential loading for all officially supported AWS SDKs and Tools (including the &lt;em&gt;AWS CLI&lt;/em&gt; and &lt;em&gt;Terraform&lt;/em&gt;).&lt;/p&gt;
&lt;p&gt;Now new questions arise. The environment variable for a profile is great, but where we define it and how we get information about which profile we’re using.&lt;/p&gt;
&lt;p&gt;At first, I looked for solutions on the internet, but I didn’t find any to my liking. So I came up with a simple bash script which would provide me with ability to quickly switch profiles, turn them off, and give me the visibility into what profile I’m currently using.&lt;/p&gt;
&lt;p&gt;Here is what the script looks like:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;if [[ $1 = 'on' ]]; then&lt;/span&gt;
  &lt;span class="s"&gt;if ! aws configure --profile $2 list &amp;amp;&amp;gt; /dev/null ; then&lt;/span&gt;
    &lt;span class="s"&gt;echo "profile \"$2\" doesn't exist"&lt;/span&gt;
  &lt;span class="s"&gt;else&lt;/span&gt;
    &lt;span class="s"&gt;if ! grep "export PS1" ~/.zshrc &amp;amp;&amp;gt; /dev/null ; then&lt;/span&gt;
      &lt;span class="s"&gt;echo "export AWS_PROFILE=$2" &amp;gt;&amp;gt; ~/.zshrc&lt;/span&gt;
      &lt;span class="s"&gt;echo "export PS1=\"($2)\$PS1\"" &amp;gt;&amp;gt; ~/.zshrc&lt;/span&gt;
    &lt;span class="s"&gt;else&lt;/span&gt;
      &lt;span class="s"&gt;sed -i -e "s/.*export PS1.*/export PS1=\"($2)\$PS1\"/" ~/.zshrc&lt;/span&gt;
      &lt;span class="s"&gt;sed -i -e "s/.*export AWS_PROFILE.*/export AWS_PROFILE=$2/" ~/.zshrc&lt;/span&gt;
    &lt;span class="s"&gt;fi&lt;/span&gt;
    &lt;span class="s"&gt;source ~/.zshrc&lt;/span&gt;
  &lt;span class="s"&gt;fi&lt;/span&gt;

&lt;span class="s"&gt;elif [[ $1 = 'off' ]]; then&lt;/span&gt;
  &lt;span class="s"&gt;sed -i -e '/.*export AWS_PROFILE.*/d' ~/.zshrc&lt;/span&gt;
  &lt;span class="s"&gt;sed -i -e '/.*export PS1=\(.*\).*/d' ~/.zshrc&lt;/span&gt;
  &lt;span class="s"&gt;source ~/.zshrc&lt;/span&gt;
  &lt;span class="s"&gt;unset AWS_PROFILE&lt;/span&gt;
&lt;span class="s"&gt;else&lt;/span&gt;
  &lt;span class="s"&gt;echo "Usage:"&lt;/span&gt;
  &lt;span class="s"&gt;echo "To switch to a specific profile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;awspr on profile-name"&lt;/span&gt;
  &lt;span class="s"&gt;echo "To turn this thing off&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;awspr off"&lt;/span&gt;
&lt;span class="s"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;As you can see, I export &lt;code class="highlighter-rouge"&gt;AWS_PROFILE&lt;/code&gt; in my &lt;code class="highlighter-rouge"&gt;~/.zshrc&lt;/code&gt; file, so that after switching to a specific profile I can open new panes in my terminal, or even multiple terminal windows, and still work with the same AWS profile.&lt;/p&gt;
&lt;p&gt;I also change the &lt;code class="highlighter-rouge"&gt;PS1&lt;/code&gt; variable, which defines what my command prompt looks like: the name of the active profile is prepended to the prompt, so I can always see which profile I’m using at the moment.&lt;/p&gt;
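&lt;p&gt;With the &lt;code class="highlighter-rouge"&gt;ex&lt;/code&gt; profile switched on, the prompt would look something like this (everything after the prefix depends on your own &lt;code class="highlighter-rouge"&gt;PS1&lt;/code&gt;):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;(ex) user@host ~ %
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;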
&lt;p&gt;I placed this script in the &lt;code class="highlighter-rouge"&gt;~/bin&lt;/code&gt; folder (&lt;code class="highlighter-rouge"&gt;~/bin/awspr.sh&lt;/code&gt;) and made it executable. The other thing I did to start using it was to create a new alias in &lt;code class="highlighter-rouge"&gt;~/.zshrc&lt;/code&gt;:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;alias awspr=". ~/bin/awspr.sh"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The leading dot makes the script run in the &lt;em&gt;current shell&lt;/em&gt; when I type the &lt;code class="highlighter-rouge"&gt;awspr&lt;/code&gt; command, which is what allows it to change &lt;code class="highlighter-rouge"&gt;AWS_PROFILE&lt;/code&gt; and &lt;code class="highlighter-rouge"&gt;PS1&lt;/code&gt; in the shell I’m working in.&lt;/p&gt;
&lt;p&gt;That’s it. Now, to switch to a specific profile I run &lt;code class="highlighter-rouge"&gt;awspr on &amp;lt;profile_name&amp;gt;&lt;/code&gt;. And when I’m not working with AWS and don’t want to see a profile name in the command prompt, I can turn this thing off by running &lt;code class="highlighter-rouge"&gt;awspr off&lt;/code&gt;: &lt;script type="text/javascript" src="https://asciinema.org/a/8j9i3h3hmwb1ghfrgtqclys1v.js" id="asciicast-8j9i3h3hmwb1ghfrgtqclys1v" async=""&gt;&lt;/script&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;P.S. The script posted here can easily be adapted to other shells and Linux distributions. My goal was to write something quick for my personal use.&lt;/em&gt;&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Master Git (part IV). Stash your changes</title>
    <link href="https://artemstar.com/2017/03/29/git/"/>
    <id>https://artemstar.com/2017/03/29/git/</id>
    <updated>2017-03-29T00:00:00Z</updated>
    <published>2017-03-29T00:00:00Z</published>
    <content type="html">
&lt;p&gt;Imagine a situation where you start working on some part of your project and make a bunch of uncommitted changes, but something urgent comes up that requires you to quickly make a few commits concerning another part of the project. In such cases, instead of losing the work you have already done, you can use the &lt;strong&gt;&lt;code class="highlighter-rouge"&gt;git stash&lt;/code&gt;&lt;/strong&gt; command to put your uncommitted changes away for later while you switch to the other task.&lt;/p&gt;
&lt;p&gt;Stashing in Git is simple. All you need to do is run&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git stash&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and Git will make it look like those uncommitted changes are gone and your repository is clean.&lt;!--break--&gt;&lt;/p&gt;
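&lt;p&gt;A minimal sketch of what this looks like (the modified file name is just an example):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git status --short
 M nginx.rb            # an uncommitted change in the working tree
$ git stash
$ git status --short   # prints nothing: the repository is clean again
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;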
&lt;p&gt;To recover the stashed changes later, you use the command&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git stash pop&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;One thing you should be aware of is that, by default, Git doesn’t stash untracked files.&lt;/p&gt;
&lt;p&gt;If we want to stash untracked files as well, we need to pass the &lt;code class="highlighter-rouge"&gt;-u&lt;/code&gt; option.&lt;/p&gt;
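&lt;p&gt;For example (&lt;code class="highlighter-rouge"&gt;notes.txt&lt;/code&gt; is a hypothetical untracked file):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ touch notes.txt   # a new file, not yet tracked by Git
$ git stash         # notes.txt stays in the working tree
$ git stash -u      # now notes.txt is stashed too
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;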
&lt;p&gt;You can stash more than once. In that case, you’ll probably want to add messages to your stashes with&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git stash save "message"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;To see a list of all your stashes:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git stash list&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;You manage individual stashes by first looking up their identifiers with the &lt;code class="highlighter-rouge"&gt;git stash list&lt;/code&gt; command and then using these commands:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git pop &amp;lt;stash_id&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;# to re-apply a stash&lt;/span&gt;
&lt;span class="s"&gt;$ git drop &amp;lt;stash_id&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;# to delete a stash&lt;/span&gt;
&lt;span class="s"&gt;$ git stash clear&lt;/span&gt; &lt;span class="c1"&gt;# delete all stashes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
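&lt;p&gt;The identifiers look like &lt;code class="highlighter-rouge"&gt;stash@{0}&lt;/code&gt;. A quick sketch (the stash message is illustrative):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git stash list
stash@{0}: On master: tweak nginx config
$ git stash pop stash@{0}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;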
</content>
  </entry>
  <entry>
    <title>Master Git (part III). Restore undone commits.</title>
    <link href="https://artemstar.com/2017/03/28/git/"/>
    <id>https://artemstar.com/2017/03/28/git/</id>
    <updated>2017-03-28T00:00:00Z</updated>
    <published>2017-03-28T00:00:00Z</published>
    <content type="html">
&lt;p&gt;When you use &lt;code class="highlighter-rouge"&gt;git reset --hard&lt;/code&gt; to undo some commits, you basically erase those commits. If you happen to change your mind about the commits you deleted, Git still provides an easy way to restore them with the help of &lt;a href="https://git-scm.com/docs/git-reflog"&gt;reflog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;reflog&lt;/strong&gt; is an ordered list of the commits that HEAD has pointed to. It is often referred to as a &lt;em&gt;safety net&lt;/em&gt;: you shouldn’t worry that your data is lost even if you change Git history with &lt;code class="highlighter-rouge"&gt;git reset&lt;/code&gt; or a wrong &lt;code class="highlighter-rouge"&gt;rebase&lt;/code&gt;, because the reflog allows you to almost always recover your project’s history. I say “almost” because the reflog doesn’t store entries forever, but only for a configured period of time (90 days by default). &lt;!--break--&gt;&lt;/p&gt;
&lt;p&gt;To see the list of HEAD positions in your repository, you can run:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git reflog&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;If you want to restore your project’s history after changing it, all you need to do is find the required position of HEAD in the reflog output and run:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git reset --hard &amp;lt;head_position&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Suppose we made a couple of commits creating and changing a file &lt;code class="highlighter-rouge"&gt;nginx.rb&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/git/git-log-reflog.jpg" alt="400x400"&gt;&lt;/p&gt;
&lt;p&gt;But then we decided that we didn’t need this file in our project at all, so we ran:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git reset --hard 673a3b2&lt;/span&gt; &lt;span class="c1"&gt;# reset to the commit before nginx.rb was introduced&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;All the commits made after the commit we passed as an argument are erased from the project’s history:&lt;/p&gt;
&lt;p&gt;&lt;img src="/public/img/git/git-reset-reflog.jpg" alt="400x400"&gt;&lt;/p&gt;
&lt;p&gt;If after some time we realize that we actually need this &lt;code class="highlighter-rouge"&gt;nginx.rb&lt;/code&gt; file in our project and the commits we just reset were not so bad after all, we can restore our commits very easily.&lt;/p&gt;
&lt;p&gt;First, we look into the &lt;code class="highlighter-rouge"&gt;reflog&lt;/code&gt;.&lt;/p&gt;
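&lt;p&gt;A sketch of what the reflog shows at this point (the commit message is illustrative):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git reflog
673a3b2 HEAD@{0}: reset: moving to 673a3b2
c9236fe HEAD@{1}: commit: change nginx.rb
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;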
&lt;p&gt;We see that at this moment (&lt;code class="highlighter-rouge"&gt;HEAD@{0}&lt;/code&gt;) HEAD points at the &lt;code class="highlighter-rouge"&gt;673a3b2&lt;/code&gt; commit to which we made the reset. And one step before that (&lt;code class="highlighter-rouge"&gt;HEAD@{1}&lt;/code&gt;) HEAD was at &lt;code class="highlighter-rouge"&gt;c9236fe&lt;/code&gt;, the last commit we made before the reset.&lt;/p&gt;
&lt;p&gt;So we just need to tell Git to reset to that commit again:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git reset --hard c9236fe&lt;/span&gt; &lt;span class="c1"&gt;# or HEAD@{1}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Our commit history is restored, and HEAD is pointing to the commit we specified.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Master Git (part II). Viewing and undoing commits.</title>
    <link href="https://artemstar.com/2017/03/27/git/"/>
    <id>https://artemstar.com/2017/03/27/git/</id>
    <updated>2017-03-27T00:00:00Z</updated>
    <published>2017-03-27T00:00:00Z</published>
    <content type="html">
&lt;p&gt;We continue to talk about Git, and in this post we’ll cover a few more things that confuse people: &lt;em&gt;viewing old versions of your files and undoing commits&lt;/em&gt;.&lt;/p&gt;
&lt;h4 id="how-to-view-the-old-version-of-my-repository"&gt;How to view the old version of my repository?&lt;/h4&gt;
&lt;p&gt;To view a previous version of your repository, you can use the following command:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git checkout &amp;lt;commit&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;# you need to provide a commit hash or tag&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;!--break--&gt;&lt;p&gt;Checking out an old commit is a &lt;em&gt;read-only&lt;/em&gt; operation, so it won’t affect the current state of your repository (you end up in a so-called detached HEAD state). If you want to build on the old commit, create a separate branch from it to make your changes permanent.&lt;/p&gt;
&lt;p&gt;After you’re finished viewing the old version of your repository, you can move HEAD back to the tip of your branch and return to the current state of your repository:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;git checkout &amp;lt;current_branch_name&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
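&lt;p&gt;A quick round trip might look like this (the commit hash and branch name are illustrative):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git checkout 673a3b2   # look around in the old version (detached HEAD)
$ git checkout master    # back to the tip of the branch
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;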
&lt;h4 id="how-to-revert-back-to-an-old-version-of-a-file"&gt;How to revert back to an old version of a file?&lt;/h4&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;git checkout &amp;lt;commit&amp;gt; &amp;lt;file&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This turns the &lt;code class="highlighter-rouge"&gt;&amp;lt;file&amp;gt;&lt;/code&gt; in the working directory into an exact copy of the one from &lt;code class="highlighter-rouge"&gt;&amp;lt;commit&amp;gt;&lt;/code&gt; and adds it to the staging area. You can then re-commit the old version as you would any other file. This basically serves as a way to revert to an old version of an individual file.&lt;/p&gt;
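&lt;p&gt;For example (the file name and revision are illustrative):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git checkout HEAD~2 app.py   # take app.py as it was two commits ago
$ git commit -m "Revert app.py to the old version"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;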
&lt;h4 id="how-to-undo-a-commit"&gt;How to undo a commit?&lt;/h4&gt;
&lt;p&gt;There are three commands that you can use.&lt;/p&gt;
&lt;p&gt;When you forgot to include files in the previous commit, or wish to rewrite its message, &lt;strong&gt;&lt;code class="highlighter-rouge"&gt;git commit --amend&lt;/code&gt;&lt;/strong&gt; is the command to use. It combines the files staged in your index with the previous commit.&lt;/p&gt;
&lt;p&gt;Let’s imagine that we made a commit but forgot to include a certain file.&lt;/p&gt;
&lt;p&gt;To edit the commit and add the forgotten file to it without changing the message (&lt;code class="highlighter-rouge"&gt;--no-edit&lt;/code&gt;), we first stage the file and then run the &lt;code class="highlighter-rouge"&gt;git commit --amend&lt;/code&gt; command.&lt;/p&gt;
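&lt;p&gt;A minimal sketch (&lt;code class="highlighter-rouge"&gt;forgotten_file&lt;/code&gt; is a placeholder):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git add forgotten_file
$ git commit --amend --no-edit   # replaces the previous commit, keeps its message
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;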
&lt;p&gt;Most of the time, I use the &lt;code class="highlighter-rouge"&gt;git commit --amend&lt;/code&gt; command simply to change the message of the previous commit.&lt;/p&gt;
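&lt;p&gt;For example (the new message is illustrative):&lt;/p&gt;
&lt;div class="language-bash highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;$ git commit --amend -m "A better commit message"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;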
&lt;p&gt;An important thing to know before using this command is that &lt;code class="highlighter-rouge"&gt;amending&lt;/code&gt; removes the previous commit and replaces it with a brand new one. So you should never amend commits that have been pushed to a public repository; otherwise, to the team members who based their work on the amended commit, it will look like the basis of their work disappeared from the project history.&lt;/p&gt;
&lt;p&gt;Another command to undo a commit is &lt;strong&gt;git revert&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git revert &amp;lt;commit&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;git revert&lt;/strong&gt; command undoes the changes introduced by the commit and creates a new commit with the resulting content. It is considered to be a “safe” way of undoing changes because this operation doesn’t change the commit history.&lt;/p&gt;
&lt;p&gt;This is the command you want to use when you need to fix a specific public (shared with others) commit in your history.&lt;/p&gt;
&lt;p&gt;Let’s say we made a commit that introduced some bad code into our application.&lt;/p&gt;
&lt;p&gt;So now, if we want to revert the “bad” commit, we run:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git revert c1196bd&lt;/span&gt; &lt;span class="c1"&gt;# hash of a bad commit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;If we check the git log now, we’ll see that a new commit was created, and that it reverted the changes of the “bad” commit.&lt;/p&gt;
&lt;p&gt;Because &lt;code class="highlighter-rouge"&gt;git revert&lt;/code&gt; doesn’t change the commit history, it’s used for undoing commits which were already published to the public repository.&lt;/p&gt;
&lt;p&gt;For reverting local changes there is the &lt;strong&gt;git reset&lt;/strong&gt; command. It returns your project to the state of a commit you specify, removing all the commits made after that commit.&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="c1"&gt;# undo all the commits after &amp;lt;commit&amp;gt;, but don't touch the index nor remove the changes made to the repository after the &amp;lt;commit&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;$ git reset --soft &amp;lt;commit&amp;gt;&lt;/span&gt;
&lt;span class="c1"&gt;# undo all the commits after &amp;lt;commit&amp;gt;, reset the index, but don't remove the changes made to the repository after the &amp;lt;commit&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;$ git reset &amp;lt;commit&amp;gt;&lt;/span&gt;
&lt;span class="c1"&gt;# Completely undo all commits and changes after the &amp;lt;commit&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;$ git reset --hard &amp;lt;commit&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;em&gt;Git reset&lt;/em&gt; alters the history, which is why it’s considered dangerous and should never be used on commits that have already been published to the origin (where other team members may have based their work on them).&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Remember: &lt;code class="highlighter-rouge"&gt;revert&lt;/code&gt; is meant to undo public commits, while &lt;code class="highlighter-rouge"&gt;reset&lt;/code&gt; is for local changes which were not pushed to a public repository.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you’re experimenting with something locally and made a few commits that you wish to undo, &lt;code class="highlighter-rouge"&gt;git reset&lt;/code&gt; is the command to use.&lt;/p&gt;
&lt;p&gt;For example, let’s reset the last 3 commits to completely remove our experiments with the &lt;code class="highlighter-rouge"&gt;git revert&lt;/code&gt; command:&lt;/p&gt;
&lt;div class="language-yml highlighter-rouge"&gt;&lt;div class="highlight"&gt;&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ git reset --hard HEAD~3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;If we now look for the file &lt;code class="highlighter-rouge"&gt;app.py&lt;/code&gt; that was added 3 commits ago, we are not going to find it, because our repository was restored to the state before this file was created.&lt;/p&gt;
</content>
  </entry>
</feed>