Kubernetes: Endpoint

An Endpoints object in Kubernetes is the object that implements a Service. It represents the set of IPs for the Pods that match a particular Service, which makes it handy when you want to check that a Service actually matches some running Pods. It also provides a way to represent a remote system as an internal service. If an Endpoints object is empty, there are no matching Pods and something is most likely wrong with your Service definition. Each Service object in Kubernetes has an associated Endpoints object; when a Service is created with a Selector, a corresponding Endpoints object with the same name is created automatically. Kube-proxy watches the Endpoints of every Service in the cluster so that it can route network requests made to the Service’s virtual IP to the endpoints that implement the Service. We can also manually create Endpoints for a Service that has no Selector. Below is an example of a manually created Endpoints object and its associated Service.

endpoint.yaml

kind: Endpoints
apiVersion: v1
metadata:
  name: test-endpoint
subsets:
  - addresses:
      - ip: 1.2.3.4
    ports:
      - port: 9376

service.yaml

kind: Service
apiVersion: v1
metadata:
  name: test-endpoint
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
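
With both manifests applied, we can confirm that the Service picks up the manually created endpoints; a quick check, assuming kubectl is pointed at the right cluster and namespace:

kubectl apply -f endpoint.yaml
kubectl apply -f service.yaml

# The Endpoints object should list 1.2.3.4:9376
kubectl get endpoints test-endpoint

# The Service description should show the same address under "Endpoints"
kubectl describe service test-endpoint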

What is CNCF and who is a CNA?

The Cloud Native Computing Foundation (CNCF) was founded in late 2015 and is part of the Linux Foundation. It is a governing body that addresses the issues faced by Cloud Native applications and acts as a vendor-neutral home for fast-growing containerization projects. It aims to ensure that the projects remain generally available to the community and helps protect them against legal action from those who have released the projects.

CNCF ensures that projects are interoperable and reliable. It helps drive the evolution of the projects and promotes them through events and meetups such as KubeCon. All projects under CNCF are container-based; some of these projects are Kubernetes, Prometheus, gRPC, Linkerd and CoreDNS. Projects fall into one of three categories: Graduated, Incubating and Sandbox. Kubernetes was the first project ever to graduate from the CNCF, and recently Prometheus became the second to graduate.

We also have Cloud Native Ambassadors (CNAs) who advocate for Cloud Native applications. They are people who are passionate about CNCF projects and are recognized for their expertise and willingness to spread their knowledge of cloud native applications to the community. A CNA could be a blogger, open-source contributor, evangelist or Meetup organizer.

Kubernetes with AWS EKS

AWS Elastic Container Service for Kubernetes (EKS) was recently announced as Generally Available (GA). Prior to EKS, AWS customers used tools like kops, Kubeadm and Terraform to provision Kubernetes clusters in AWS; EKS is now another option for creating clusters on AWS. EKS was designed to simplify the process of setting up Kubernetes, scaling it and configuring its networking. In this design, AWS fully manages the Kubernetes control plane and provides high availability by spreading it across multiple AWS Availability Zones. Here is an image that I got from the AWS documentation page that illustrates the cluster setup.

The control plane components, such as etcd and the API server, are spread across three Availability Zones (AZs). The Kubernetes command line tool “kubectl” uses the IAM roles created by the AWS user and the Heptio Authenticator to access and authenticate against the cluster. There is also eksctl, a command line tool that can be used to create an EKS cluster in minutes with the one-line command eksctl create cluster. AWS uses Elastic Network Interfaces for CNI purposes in EKS to allocate IPs to Pods.
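
As a rough sketch of what that looks like (the cluster name, region and node count below are placeholder values I chose, and the defaults and flags may vary by eksctl version):

# Create a small EKS cluster with mostly default settings
eksctl create cluster --name demo-cluster --region us-east-2 --nodes 2

# Once provisioning finishes, kubectl should be able to reach the cluster
kubectl get nodes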

A couple of really interesting things about EKS are that the cluster is highly available and can be scaled based on utilization. We also have the option of using the different types of load balancers in AWS, such as the Elastic Load Balancer (ELB), Network Load Balancer (NLB) and Application Load Balancer (ALB), to route traffic to pods running in the cluster. EKS also integrates with Route 53 for exposing services running in the cluster, allowing these services to be reachable using Route 53 DNS records.

EKS introduces some limitations to what customers can utilize or manage in the cluster. With EKS, users are limited to the CNI provided by AWS, the Kubernetes API cannot be extended as we do not have control of the API server configuration, and certain controllers that Kubernetes users might need, such as the Pod Disruption Budget (PDB), are not implemented.

I will be making a post showing how to set up an EKS cluster later on.

How to setup Go on Mac using Homebrew

In this post I will be showing how to install Go on a Mac using Homebrew. The prerequisite for installing Go in this demo is that you have Homebrew installed. Let’s get started.

Step 1: Install Go with Homebrew

tunde:~ babatundeolu-isa$ brew update
tunde:~ babatundeolu-isa$ brew install golang

Step 2: Now that we have Go installed, let us confirm the version that was installed

tunde:~ babatundeolu-isa$ go version
go version go1.11 darwin/amd64

Step 3: Let us set up our workspace.

tunde:~ babatundeolu-isa$ mkdir Aquatribe-go
tunde:~ babatundeolu-isa$ cd Aquatribe-go/

Next we will create the bin, pkg and src directories. The bin directory will contain all compiled binaries, pkg will contain Go package objects and src will contain our Go projects.

tunde:Aquatribe-go babatundeolu-isa$ mkdir -p src pkg bin

Step 4: We will set up the environment variables needed for our Go projects.
To do this, we will open the .bash_profile file located in the home directory and add the workspace we are using for our Go work.

tunde:~ babatundeolu-isa$ vi .bash_profile

Next, add the following lines that export the variables for the Go workspace.

#for GO Programming
export GOPATH="$HOME/Aquatribe-go"
export PATH="$HOME/Aquatribe-go/bin:$PATH"
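
After saving the file, reload the profile so the variables take effect in the current shell (this assumes the default bash shell on macOS); go env GOPATH should then print the workspace path we just configured.

tunde:~ babatundeolu-isa$ source ~/.bash_profile
tunde:~ babatundeolu-isa$ go env GOPATH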

Step 5: Let us try running a simple Go program.
First, create the following file in the src directory and save it as intro.go.

package main
import "fmt"

func main() {
    fmt.Printf("Aquatribe provides solutions for Containers, Serverless and Cloud.\n")
}

Let us run the code.

tunde:src babatundeolu-isa$ go run intro.go
Aquatribe provides solutions for Containers, Serverless and Cloud.

We can also use the go install command to compile the code and place the binary in the bin directory.

tunde:src babatundeolu-isa$ go install intro.go
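
If go install reports that there is no install location for .go files listed on the command line, a common alternative (a sketch based on the workspace layout from step 3) is to give the program its own package directory under src and run go install from there; the resulting intro binary lands in the bin directory, which is already on our PATH.

tunde:src babatundeolu-isa$ mkdir intro
tunde:src babatundeolu-isa$ mv intro.go intro/
tunde:src babatundeolu-isa$ cd intro && go install
tunde:intro babatundeolu-isa$ intro
Aquatribe provides solutions for Containers, Serverless and Cloud.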

For more details on Go refer to the links below.
https://golang.org
https://godoc.org
https://tour.golang.org

Kubernetes: Quality of Service (QoS) for pods

Quality of Service (QoS) for pods is a Kubernetes concept that helps with compute resource management. Cluster administrators can prioritize which pods get resources based on the QoS class. Pods are classified into one of the following classes: Guaranteed, Burstable and BestEffort. The QoS classes are used to make decisions about scheduling and evicting pods. A Guaranteed pod has the highest priority; a pod is assigned the Guaranteed class when the limit and request for both CPU and Memory are the same. If a container specifies a limit for CPU or Memory without specifying a request, Kubernetes automatically assigns a request equal to that limit. Burstable is assigned to a pod when requests or limits are set but the pod does not meet the Guaranteed criteria, for example when the limit is above the request. The Guaranteed class has a greater priority than the Burstable class. Lastly, the BestEffort class is assigned to a pod that has no limits or requests set for either CPU or Memory. Below are examples of pods in each class.

Guaranteed pod example

apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-class
spec:
  containers:
  - name: qos-example
    image: nginx
    resources:
      limits:
        cpu: 500m
        memory: 350Mi
      requests:
        cpu: 500m
        memory: 350Mi

Burstable pod example

kind: Pod
apiVersion: v1
metadata:
  name: burstable-class
spec:
  containers:
  - name: burstable-example
    image: nginx
    resources:
      limits:
        memory: "500Mi"
      requests:
        memory: "300Mi"

BestEffort pod example

apiVersion: v1
kind: Pod
metadata:
  name: besteffort
spec:
  containers:
  - name: besteffort-example
    image: nginx

Kubernetes with Amazon Web Services (AWS), Kubeadm and Calico.

In this post I will be demonstrating how to install Kubernetes on AWS using Kubeadm to bootstrap the Kubernetes cluster and Calico as the container networking interface (CNI). The Kubernetes cluster for this setup will consist of a master node and two worker nodes, all running Ubuntu 16.04 LTS. We will be running Kubernetes version 1.10.0 in our cluster. Let’s get started!

First, we need to set up our AWS environment, which consists of the EC2, Security Group and VPC configuration.
Let us create our instances and configure security groups. On the EC2 dashboard page, click on Launch Instance.

We will choose the Ubuntu Server 16.04 LTS AMI.

We will select the T2.medium instance type for our Master node. Users can choose an instance type that fits their use case.

Next we will configure the instance details. Here, I have left the details as default. I am running in the default VPC with the subnets being auto-assigned.

Next, we will configure storage. I have gone with the defaults for the purpose of this demo.

Next, we will configure Tags. I have decided not to add any tags.

Next, we will configure the Security Group. This part is crucial for setting up the cluster. The SG determines which ports can communicate within the cluster. We will need to allow port 6443 and allow all communication within the subnet (the subnet CIDR can be obtained from the VPC CIDR that we are using). I have also allowed SSH traffic from my IP and allowed HTTP and HTTPS traffic. I would not advise using this configuration in a production environment as it is not secure.

Next we review and then launch. You will also be prompted to create or select a key pair.

Go back to the EC2 dashboard to view the state of the instance.

I have named the first instance we created Master.

Next we will re-run the instance creation steps to create the remaining two Ubuntu Server 16.04 LTS nodes. This time we will select the T2.micro instance type and use the same security group that we used for the first instance (node). The key pair can remain the same as well.

Now that we have set up the three instances (nodes), let us SSH into each instance with the private key we downloaded earlier and set up Kubernetes v1.10.

Let us SSH into the Master node and run the commands below.

We will first need to restrict the permissions on the private key file.

tunde:~ babatundeolu-isa$ chmod 400 kubecalico.pem
tunde:~ babatundeolu-isa$ ssh -i "kubecalico.pem" ubuntu@ec2-18-218-194-202.us-east-2.compute.amazonaws.com

On our master node, we will need to disable swap. This is because Kubernetes does not know how to handle memory eviction when swap is enabled, and by default the kubelet will not run with swap on.

ubuntu@ip-172-31-7-182:~$ sudo swapoff -a
ubuntu@ip-172-31-7-182:~$ sudo apt-get update
ubuntu@ip-172-31-7-182:~$ sudo apt-get install -y apt-transport-https

We will now change to the root user.

ubuntu@ip-172-31-7-182:~$ sudo su
root@ip-172-31-7-182:/home/ubuntu# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@ip-172-31-7-182:/home/ubuntu# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb http://apt.kubernetes.io/ kubernetes-xenial main
> EOF

Install Docker, Kubeadm, Kubelet, Kubectl and Kubernetes-cni

root@ip-172-31-7-182:/home/ubuntu# apt update
root@ip-172-31-7-182:/home/ubuntu# apt-get install -y docker.io
root@ip-172-31-7-182:/home/ubuntu# apt-get install -y kubectl=1.10.0-00
root@ip-172-31-7-182:/home/ubuntu# apt-get install -y kubelet=1.10.0-00
root@ip-172-31-7-182:/home/ubuntu# apt-get install -y kubeadm=1.10.0-00
root@ip-172-31-7-182:/home/ubuntu# apt-get install -y kubernetes-cni

We have now installed all the tools needed to set up the Kubernetes cluster.

Next we will bootstrap the master node with Kubeadm. We use the --pod-network-cidr flag to specify the CIDR block that Calico (CNI) will use and the --kubernetes-version flag to indicate the version of Kubernetes we want to run.

root@ip-172-31-7-182:/home/ubuntu# kubeadm init --pod-network-cidr=192.168.0.0/16  --kubernetes-version stable-1.10

We should get an output similar to the one below, containing the token required for the worker nodes to join the Master.

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.31.7.182:6443 --token 3b4od2.xznw4lvat26ckjdq --discovery-token-ca-cert-hash sha256:157cfec2e60e38debe1fb206138830a3745dd7a86498f5e1be6f06a8887565bc
root@ip-172-31-7-182:/home/ubuntu# exit

Run the next set of commands as a regular user.

ubuntu@ip-172-31-7-182:~$ mkdir -p $HOME/.kube
ubuntu@ip-172-31-7-182:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
ubuntu@ip-172-31-7-182:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

We will now install the Calico CNI. It is required to enable pod communication within the cluster.

ubuntu@ip-172-31-7-182:~$ kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

Verify that pods are running in the cluster and that the Master node is ready.

ubuntu@ip-172-31-7-182:~$ kubectl get pods --all-namespaces
ubuntu@ip-172-31-7-182:~$ kubectl get node
NAME              STATUS    ROLES     AGE       VERSION
ip-172-31-7-182   Ready     master    14m       v1.10.0

As you can see, our Master node is ready and running Kubernetes version 1.10.0.
Let us now add the remaining two worker nodes to the Kubernetes cluster using the token provided by Kubeadm during the Master setup.

We will need to disable swap and install Docker, Kubelet and Kubeadm on the worker nodes.

ubuntu@ip-172-31-38-135:~$ sudo swapoff -a
ubuntu@ip-172-31-38-135:~$ sudo apt-get update
ubuntu@ip-172-31-38-135:~$ sudo apt-get install -y apt-transport-https
ubuntu@ip-172-31-38-135:~$ sudo su 
root@ip-172-31-38-135:/home/ubuntu# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@ip-172-31-38-135:/home/ubuntu# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb http://apt.kubernetes.io/ kubernetes-xenial main
> EOF
root@ip-172-31-38-135:/home/ubuntu# apt update
root@ip-172-31-38-135:/home/ubuntu# apt-get install -y docker.io
root@ip-172-31-38-135:/home/ubuntu# apt-get install -y kubelet=1.10.0-00
root@ip-172-31-38-135:/home/ubuntu# apt-get install -y kubeadm=1.10.0-00

Next, we will join the cluster using the token generated earlier. Run the kubeadm join command as root on the worker node.

root@ip-172-31-38-135:/home/ubuntu# kubeadm join 172.31.7.182:6443 --token 3b4od2.xznw4lvat26ckjdq --discovery-token-ca-cert-hash sha256:157cfec2e60e38debe1fb206138830a3745dd7a86498f5e1be6f06a8887565bc
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "172.31.7.182:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.31.7.182:6443"
[discovery] Requesting info from "https://172.31.7.182:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.31.7.182:6443"
[discovery] Successfully established connection with API Server "172.31.7.182:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

We will run the same worker-node steps to join the last worker node to the cluster.

Once that is done, let us go ahead and run kubectl get nodes on the master to confirm that the nodes have joined the cluster.

ubuntu@ip-172-31-7-182:~$ kubectl get node
NAME               STATUS    ROLES     AGE       VERSION
ip-172-31-36-82    Ready     <none>    2m        v1.10.0
ip-172-31-38-135   Ready     <none>    8m        v1.10.0
ip-172-31-7-182    Ready     master    35m       v1.10.0

Let us view the cluster health.

ubuntu@ip-172-31-7-182:~$ kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

We have set up a Kubernetes cluster running version 1.10.0 using Kubeadm on AWS, with Calico as the CNI of choice. Practice using Kubernetes with the cluster we have created together.
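
As a quick smoke test of the new cluster, we can deploy a couple of nginx pods and check that they receive Calico-assigned IPs; the deployment name and replica count below are arbitrary choices of mine.

ubuntu@ip-172-31-7-182:~$ kubectl run nginx --image=nginx --replicas=2
ubuntu@ip-172-31-7-182:~$ kubectl get pods -o wide

If the pods show IPs from the 192.168.0.0/16 range we handed to Calico and are scheduled on the worker nodes, pod networking is working as expected.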
 
Heptio’s Sonobuoy

I got to use Heptio’s Sonobuoy, an open source tool for conformance testing, and I have to say it’s truly amazing how it works. Over the past year, Heptio has developed a couple of other really interesting and cool tools like Ksonnet and Ark. In this post I will focus on Sonobuoy, which is a really exciting tool. Sonobuoy allows us to run conformance tests and diagnostics on a Kubernetes cluster in a non-destructive manner, which lets us tell whether our cluster is going to behave as intended or desired. Sonobuoy can be deployed on multiple environments like AWS, GKE, Minikube and AKS, and it currently supports Kubernetes version 1.9 and later. When Sonobuoy is run on a cluster, its diagnostics provide detailed, customizable information on the state of the cluster. There is also a UI tool called Sonobuoy Scanner, a browser-based tool that helps us quickly deploy Sonobuoy; Sonobuoy Scanner currently runs only the conformance tests. Sonobuoy also has a CLI that gathers detailed information such as systemd logs from each node, cluster node status, e2e conformance test results and Kubernetes API data.

Before we can run Sonobuoy, we need to have a Kubernetes cluster that is up and running. After Sonobuoy is run in our cluster, it creates a compressed results directory. The results contain details about our cluster configuration in JSON format. The results folder contains items like hosts, plugins, resources and server version, with each folder containing detailed information about the cluster.
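
A rough sketch of that workflow with the Sonobuoy CLI (the exact commands may differ slightly between Sonobuoy versions):

# Launch the Sonobuoy pods and start the conformance tests
sonobuoy run

# Check on progress, then download the compressed results once complete
sonobuoy status
sonobuoy retrieve

# Remove the Sonobuoy pods and namespace when finished
sonobuoy delete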

Sonobuoy is one of the many interesting things Heptio has going on. One really interesting thing to look at is the Heptio Kubernetes Subscription (HKS). With HKS, we can get support for platform-agnostic Kubernetes clusters, freeing us from being locked into a single cloud provider for our clusters. Later on, I will have a post with more details on HKS.

Jenkins X

Jenkins X is an open-source project that is gaining a lot of popularity in the DevOps community. Jenkins X is a CI/CD tool that facilitates application build and deployment automation. It is a subproject of Jenkins and is supported on multiple cloud platforms such as AWS, GCP and Azure; it can also be set up on Minikube. Jenkins X makes your life easier by automating the configuration of different tools like Kubernetes, Jenkins, Git, Helm and others. These applications are coupled together to enhance the behavior and efficiency of Jenkins X.

The Jenkins X installation includes Helm, Monocular, Jenkins, Hipster, Docker-registry, ChartMuseum, Nexus and MongoDB. It automates the setup of pipelines using what is called a Jenkinsfile, a configuration file that defines a pipeline. By default, it creates a staging and a production environment, and these environments are the equivalent of Kubernetes Namespaces. Applications are deployed to environments as specified by the user, and for each application that is deployed a Git repo is created for versioning purposes.

It also implements something called Promotion, which allows us to move applications from one environment to another. We also have the jx command line tool that allows us to interact with Jenkins X; it is used to manage resources within the cluster.
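
A small sketch of what interacting with jx looks like; the application name, version and environment below are hypothetical:

# List the environments Jenkins X created (staging and production by default)
jx get environments

# Promote a build of an application from staging to the production environment
jx promote myapp --version 0.0.1 --env production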

Jenkins X is a project that is still growing, solves a lot of problems and shows a lot of promise. I like the implementation of Previews, which allow us to see what Pull Request changes will look like before merging to master. Jenkins X is a highly container-centric tool, so it will work well for companies that work heavily with containerized applications. One downside is that the documentation is still in its early phase and not a lot of architectural concepts are explained in detail.

Later on I will be doing a demo on how to run Jenkins X on Minikube and on a cloud platform.

Kubernetes: Pod Security Policy

A Pod Security Policy (PSP) is a Kubernetes resource that allows us to set security limitations on pods across the cluster. In order to use a PSP, the PodSecurityPolicy admission controller needs to be enabled. The purpose of a PSP is to govern the behavior of pods in the cluster, and it operates at the cluster-wide level. If you read my previous post or have heard about security contexts in the past, you might be wondering how a PSP is different. The biggest difference is that a PSP operates at the cluster level and the configuration no longer needs to be attached to the pod manifest, so it essentially automates the enforcement of security contexts. Below is an example of a PSP that stops the creation of privileged pods.

 
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: hello
spec:
  privileged: false 
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
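
After applying the policy we can confirm it exists; keep in mind that the PodSecurityPolicy admission controller must be enabled and the pod’s service account must be authorized (via RBAC) to use the policy before it takes effect. A quick check, assuming the manifest above was saved as psp.yaml (a file name I made up):

kubectl apply -f psp.yaml
kubectl get psp hello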

Prometheus on Kubernetes.

Prometheus is an open-source monitoring system that allows us to gather application performance statistics. It gives us the ability to collect application-specific metrics, and these metrics can further be used to control the behavior of software. Prometheus graduated within the CNCF and is the second project to do so. It is heavily used in the Kubernetes community.

The architecture includes a Prometheus server that collects metrics from remote locations and stores them in a time-series database. Prometheus also comes with an Alertmanager that can be configured to send alerts or trigger behavior in our cluster based on those alerts. Prometheus uses cAdvisor to gather information on the nodes within a cluster. Prometheus can be paired with an application like Grafana, which provides beautiful visualizations and real-time analytics dashboards.
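
As a quick illustration of how we would poke at a Prometheus server running in a cluster, here is a sketch; the namespace and service name depend entirely on how Prometheus was installed, and the values below are assumptions based on a typical Helm chart install:

# Forward the Prometheus server's port to the local machine
kubectl port-forward -n monitoring svc/prometheus-server 9090:80

# Ask the Prometheus HTTP API whether its scrape targets are up
curl 'http://localhost:9090/api/v1/query?query=up'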