Encrypt Kubernetes data at rest

Notes/snippets used in the video: Kubernetes stores all of its data in etcd. The apiserver is the only component that reads from and writes to etcd. So, before storing data in etcd, we can ask the apiserver to encrypt it, and while it retrieves data from etcd, we can ask the apiserver to decrypt it. When we start the apiserver, we have to provide the encryption config with the below flag:
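The excerpt ends right at the flag; for completeness, the flag is --encryption-provider-config, and a minimal EncryptionConfiguration looks roughly like the sketch below (the exact resources and providers used in the video may differ):

    # kube-apiserver flag (file path is an example)
    --encryption-provider-config=/etc/kubernetes/encryption-config.yaml

    # encryption-config.yaml: encrypt Secrets with AES-CBC; identity lets existing plaintext data still be read
    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
          - secrets
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: <base64-encoded-32-byte-key>
          - identity: {}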

Identity Federation

Before Identity Federation, let's talk about how we access resources in the cloud. Note: the application needs access in order to consume the resources shown above. Examples of the above diagram:

- An application running on-prem accessing GCP resources (like a GCS bucket, Container Registry, S3, Vault, etc.)
- An application running in another cloud accessing GCP resources
- GitHub Actions trying to deploy/create resources in Azure
- Jenkins running in Kubernetes trying to deploy things to AWS
- An application running in a GCP Cloud Function trying to access an AWS resource

Using identities of that cloud:

- Create an identity
- Attach permissions
- Create a key/secret/token for that identity
- Share the key with the application

What are the problems of using a cloud identity outside that cloud?

Rename terraform module name or resource name without recreating it

Let's say you want to rename your module after terraform has been applied at least once. Example: you have a module named access,

    module "access" {
      source    = "..."
      variables = "..."
    }

and you want to rename it to iam, like

    module "iam" {
      source    = "..."
      variables = "..."
    }

Normal ways, by destroying and recreating:

- Option 1: Comment out the module and apply (which will destroy it), then rename it and apply again (which will create it with the new name).
- Option 2: Use a targeted destroy and apply, like terraform apply -destroy -target module.access
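The excerpt cuts off before the post's actual answer, but the standard non-destructive options are to move the addresses in state or, on newer Terraform versions, to declare the rename with a moved block (sketch below, module names taken from the example above):

    # Option A: rewrite the state addresses in place, no destroy/recreate
    terraform state mv module.access module.iam

    # Option B (Terraform >= 1.1): record the rename in code and let plan/apply handle it
    moved {
      from = module.access
      to   = module.iam
    }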

Avoid Deadlocks in Terraform

Recently I encountered a problem with Terraform code that I had written: it created a deadlock between resources.

Contents: Example, Problem, Solutions, References

Example:

    resource "google_compute_instance_template" "instance_template" {
    }

    resource "google_compute_instance_group_manager" "igm" {
      version {
        instance_template = google_compute_instance_template.instance_template.id
      }
    }

Note: the resource google_compute_instance_group_manager depends on the resource google_compute_instance_template.

Problem: whenever there is a change in the config of google_compute_instance_template, terraform will delete it and create it freshly (as google_compute_instance_template is an immutable resource)
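The excerpt stops before the solutions; a common way to break this delete-before-create cycle (not necessarily the exact fix the post ends with) is to let Terraform create the replacement template first:

    resource "google_compute_instance_template" "instance_template" {
      # name_prefix (instead of a fixed name) lets the old and new templates coexist briefly
      name_prefix = "app-template-"

      lifecycle {
        # Create the new template before destroying the old one, so the
        # instance group manager is never left pointing at a deleted template.
        create_before_destroy = true
      }
    }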

Github actions with Manual Approval Jobs

Let's say we use Terraform and GitHub Actions for CI/CD. We will have plan and apply steps like below:

    name: 'Terraform'
    on: [push, pull_request]
    jobs:
      check_lint:
        #....
      check_vulnerability:
        needs: check_lint
        #...
      plan_and_apply:
        name: 'Plan and Apply'
        runs-on: ubuntu-latest
        needs: check_vulnerability
        steps:
          - name: Checkout
            uses: actions/checkout@v2
          - name: Setup Terraform
            uses: hashicorp/setup-terraform@v1
          - name: Terraform Init
            run: terraform init
          - name: Terraform Plan
            run: terraform plan
          - name: Terraform Apply
            run: terraform apply -auto-approve

How do we make it approval based?
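The excerpt ends at the question; one widely used answer (which may or may not be the one the post describes) is GitHub environments with required reviewers: split apply into its own job tied to an environment, and GitHub pauses that job until someone approves it. Job and environment names below are illustrative:

      apply:
        name: 'Terraform Apply'
        runs-on: ubuntu-latest
        needs: plan            # assumes plan has been split into its own job
        # 'production' must exist in the repo settings with required reviewers;
        # the job then waits for manual approval before it starts.
        environment: production
        steps:
          - name: Checkout
            uses: actions/checkout@v2
          - name: Setup Terraform
            uses: hashicorp/setup-terraform@v1
          - name: Terraform Init
            run: terraform init
          - name: Terraform Apply
            run: terraform apply -auto-approve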

Access management for github at scale

In order to manage multiple users in a GitHub organization, it is best to create multiple teams with different access and add users to the necessary teams. Example:

    OrgName: example
    Teams: # each team will have different repo access
      - frontend
      - backend
      - devops
    Members:
      - name: dineshba
        teams: devops, backend
      - name: another-dev
        teams: frontend
      - name: some-tech-lead
        teams: frontend, backend, devops

Hope the above YAML explains the concepts of organization, teams and team memberships at a high level.
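At scale you would typically codify such a structure rather than click through the UI; as an illustration (my sketch, not necessarily what the post goes on to do), the Terraform GitHub provider can express the same teams and memberships:

    resource "github_team" "devops" {
      name    = "devops"
      privacy = "closed"
    }

    resource "github_membership" "dineshba" {
      username = "dineshba"
      role     = "member"
    }

    resource "github_team_membership" "dineshba_devops" {
      team_id  = github_team.devops.id
      username = "dineshba"
      role     = "member"
    }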

Terminal: History Matters

To be more productive, it is better to reuse the commands in your history. In order to do that:

- Save more history
- Optimize the history
- Be able to find and reuse history easily

Save more history:

    # bashrc/zshrc/etc
    #### increase history size
    export HISTSIZE=1000000
    export HISTFILESIZE=1000000
    #### Note: You can specify how big you want

Optimize the history:

- Remove the duplicates in the history file
- Keep the last entry when there is a duplicate (to maintain the order)

I wrote a simple script in the Nim programming language for the bash_history file.
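The Nim script itself is not in this excerpt; as an illustration of the same dedup idea (keep only the last occurrence of each command), here is a shell sketch for a plain bash_history file without timestamps:

    # Reverse the file, keep the first (i.e. most recent) occurrence of each line, reverse back.
    # On macOS, tac comes from coreutils (or use 'tail -r' instead).
    tac ~/.bash_history | awk '!seen[$0]++' | tac > /tmp/history.dedup
    mv /tmp/history.dedup ~/.bash_history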

Gitfetcher

Why will I not do git fetch or git pull? Because I have automated it. Highly inspired by bash-git-prompt. If you are already using it (or an equivalent), the custom script below may be useful to you.

Problem statement:

- Whenever I jump into a git directory, I do git pull
- Sometimes I forget to do it, which might create conflicts
- So I created a script which will do git fetch
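The custom script is not shown in this excerpt; a minimal sketch of the idea for zsh (the hook-based approach and names below are my assumption, not the author's actual script):

    # ~/.zshrc: fetch in the background whenever we cd into a git work tree
    auto_git_fetch() {
      if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
        (git fetch --quiet >/dev/null 2>&1 &)   # don't block the prompt
      fi
    }
    autoload -U add-zsh-hook
    add-zsh-hook chpwd auto_git_fetch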

There are no users in Kubernetes

Where is user information stored in Kubernetes? Kubernetes does not store user details. Then how is Kubernetes able to do authentication?

Using CA certificates:

- We can configure Kubernetes to use a CA authority
- Create a certificate with the user name and sign it with the Kubernetes CA
- The certificate will have the user data (not Kubernetes); Kubernetes will only validate the certificate

Using external systems:

- As Kubernetes delegates authentication to external systems, we can easily onboard users into Kubernetes
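As a concrete sketch of the certificate-based flow (the user, group and CA paths below are examples, not from the post): create a key and CSR with the username as CN, sign it with the cluster CA, and use the result as client credentials:

    # Key + CSR: the username goes into CN, groups into O
    openssl genrsa -out dinesh.key 2048
    openssl req -new -key dinesh.key -subj "/CN=dinesh/O=dev-team" -out dinesh.csr

    # Sign with the cluster CA (paths typical for kubeadm-based clusters)
    openssl x509 -req -in dinesh.csr \
      -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
      -CAcreateserial -out dinesh.crt -days 365

    # Kubernetes only validates the cert; the "user" lives in the cert itself
    kubectl config set-credentials dinesh --client-certificate=dinesh.crt --client-key=dinesh.key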

Openrc

Problem statement: after doing git push, how many of us want to see

- the status of the pipeline
- the new commits in the GitHub/GitLab UI
- new artifacts in the Nexus repo
- the wiki page of the git repo

To be generic: you want to open a particular URL after an action. How can we achieve this easily?

Solution: put this openrc script file somewhere in your PATH (so that we can invoke it from anywhere), e.g. /usr/local/bin/openrc
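The script itself is not included in this excerpt; a minimal sketch of the idea, assuming the URLs live in a per-repo .openrc file (the real script may work differently):

    #!/usr/bin/env bash
    # /usr/local/bin/openrc (sketch): open every URL listed in the repo's .openrc file
    set -euo pipefail

    repo_root="$(git rev-parse --show-toplevel)"   # assumes we are inside a git repo
    while read -r url; do
      [ -n "$url" ] && open "$url"                 # macOS; use xdg-open on Linux
    done < "$repo_root/.openrc"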

CSRF and its protection

What is a cookie? An HTTP cookie (web cookie, browser cookie) is a small piece of data that a server sends to the user's web browser. The browser may store it and send it back with the next request to the same server. (source) Note: it is sent back to the server by the browser by default.

Cross-Site Request Forgery (CSRF): let's say you have logged into your bank's website, and the bank website uses cookies to store the user authentication details (say a session ID or OAuth tokens)

Fix corrupted Terraform state

Let's say you have corrupted your Terraform state. There are multiple ways in which you can corrupt it, so I am not going to tell you how we corrupted it. In order to fix it:

You have to remove the unnecessary resources from the state:

    terraform state rm resource.resource_id
    # eg: terraform state rm google_container_node_pool.node_pool
    # eg: terraform state rm google_container_cluster.cluster

You have to import the actual state into the state file (make sure import is available for your resource): terraform import resource.resource_id
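The excerpt is cut off at the import command; for illustration, importing the GKE cluster from the examples above might look like this (project, location and cluster name are placeholders):

    # Bring the real cluster back into state under the existing resource address
    terraform import google_container_cluster.cluster \
      projects/my-project/locations/us-central1/clusters/my-cluster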

Which scheduler scheduled the scheduler in minikube ?

In one line: minikube is a tool that makes it easy to run Kubernetes locally. In the kube-system namespace, we can see a result like below:

    $ kubectl get pods -n kube-system -o name -l tier=control-plane
    pod/etcd-minikube
    pod/kube-apiserver-minikube
    pod/kube-controller-manager-minikube
    pod/kube-scheduler-minikube   # the scheduler is running, but who scheduled it?

Whenever we create a pod (rs/deployments/sts/ds), the scheduler is the one which schedules these pods onto one of the available nodes (in minikube, there is only one). In minikube, kube-scheduler-minikube is the one who schedules.
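The excerpt stops at the question; the short answer (presumably what the post goes on to explain) is that the control-plane pods are static pods, created directly by the kubelet from manifest files rather than by the scheduler. You can see those manifests inside the minikube node (output shown for a kubeadm-based minikube):

    # Static pod manifests picked up directly by the kubelet
    $ minikube ssh -- ls /etc/kubernetes/manifests
    etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml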

Not all machines are pingable

To check whether a machine (IP) is reachable, I used only ping. If ping did not work, I would conclude that the IP is not reachable. If you are like me, please read further.

How does ping work? Ping sends an ICMP Echo to the given IP and waits for the ICMP Echo Reply. If we get the Echo Reply, then the ICMP server at the given IP is able to send a response to our Echo request.
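The excerpt cuts off here; the point of the title is that ICMP is often filtered by firewalls or cloud security groups while the machine is still reachable on the ports that matter. Checking a specific TCP port is a more reliable test (IP and port below are examples):

    $ ping 10.0.0.5            # may time out if ICMP is filtered
    $ nc -vz 10.0.0.5 443      # succeeds if the host is reachable and something listens on 443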

Resolve vs Reach

I have seen a lot of people confuse resolving DNS with reaching an IP, hence this small blog. In order to get data from any computer, we need to know the address of that computer, which is its IP address. Let's say we want to get some data from Google, and its IP is 172.217.31.196 (currently). It is very hard to remember this IP, and they can also have multiple machines (IPs) which will serve the same data (horizontal scaling).
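To make the distinction concrete before reading on, resolving and reaching are two separate checks (using the example name and IP from above):

    # Resolve: ask DNS which IP the name maps to (no packet is sent to Google itself)
    $ dig +short google.com     # e.g. 172.217.31.196

    # Reach: actually try to talk to that IP
    $ ping -c 3 172.217.31.196
    $ nc -vz 172.217.31.196 443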

My OSS Journey

I would love to quickly share my OSS journey, and I will keep it very short. I am writing this now (in September 2019) because I am going to celebrate the first anniversary of my OSS journey this October.

A year ago (October 2018) vs now (September 2019):

- No PRs to any tools/libs → 31 PRs (25 merged, 3 open, 3 closed) (refer)
- Knew at a high level what the tool is doing → Know what is happening internally (you might get to know which function is getting called :-P)
- Okay to live with bugs → Raise an issue or fix the bug (refer)
- Okay to live with existing features → Trying to think about what feature would be helpful for me (and for others too)
- Not attached to any tool/lib → Feeling proud about the tool/lib
- Felt only people from a different planet could contribute → No, they are just like me
- No tools by me → Have at least one tool (gg)
- Nothing special about me → My resume shines
- No cool t-shirts → Have two t-shirts (Hacktoberfest, TW internal Hacktoberfest)

Note: you have to be logged in to GitHub to see the above PR and issue links.

Theme of my website

Backbone of my website: I am using Hugo to build this website. Thanks to Aswin Karthik for his blog, which helped me set up this website. Just write markdown files and push, and Travis CI will publish your website. So cool!!! There are a lot of themes available here. After some searching, I landed on the current theme, which is hyde. I really liked the theme, and it saved me a lot of time. This theme gives a lot of options to customize the website.

How I am using fzf for every 5 min of my programming life

Installation for new users:

    $ brew install fzf
    $ $(brew --prefix)/opt/fzf/install   # useful key bindings and fuzzy completion

Note: it is better to enable the key bindings and fuzzy completion.

Default options that I use:

    export FZF_DEFAULT_OPTS='--height 40% --reverse --border'

My usages:

- To search in command history: CTRL + R
- To change directory: cd + (CTRL + T) (type cd and then press CTRL plus T), or cd ** + TAB
- To find and open files in vim (or any editor): vim $(fzf), vim + (CTRL + T), vim ** + TAB, or vim $(fzf --preview 'cat {}')

You can even search for and open multiple files using TAB.
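In the same spirit, fzf composes well with other commands; as an extra illustration (mine, not from the post), a tiny helper to fuzzy-pick and check out a git branch:

    # gco: pick a local branch with fzf and check it out
    gco() {
      local branch
      branch=$(git branch --format='%(refname:short)' | fzf --height 40% --reverse) || return
      git checkout "$branch"
    }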

Super cool exec command

exec (built-in command). We can use exec for two use cases (as far as I learned today :-P).

To create an independent shell from the current shell.

Usual way:

    $ bash   # creates a bash shell
    $ # do whatever you wanted
    $ exit   # will go back to the shell which created it

Using exec:

    $ exec bash   # creates a bash shell and kills the current shell
    $ # do whatever you wanted
    $ exit        # will not go back to the shell which created it.
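A quick way to see the difference is to watch the shell's PID: exec replaces the current process instead of starting a child (my illustration, not from the post):

    $ echo $$       # PID of the current shell, e.g. 4242
    $ bash          # child shell: echo $$ now prints a different PID
    $ exit          # back to 4242
    $ exec bash     # same PID: the shell was replaced, not forked
    $ echo $$       # still 4242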

My Talks

Recent / Top

Deploying applications securely using HashiCorp Vault: Secret management becomes tedious while dealing with a lot of users/applications. Let's try to deploy a sample application to a Kubernetes cluster using Jenkins CI/CD with HashiCorp Vault, and understand how Vault is able to solve this seamlessly.

Power of HashiCorp Vault and its internals: As we have started moving all our infra to the cloud, secret management for applications, and also for developers/infra admins, is a challenging task.

About

I'm a coding freak who is always interested in solving and seeking out challenging problems. Being at Thoughtworks, I found a lot of opportunities to work in various domains, a few of them being DevOps tools (Docker, Kubernetes, Terraform, etc.), mobile (both iOS and Android), and microservices architecture. Across all my projects, we have used a ton of open source tools and benefited a lot. After taking so much from the community, I felt it was only natural to give back as well.