Welcome to Knative.

Knative is an open source project started by Google, Red Hat, IBM, Pivotal, and other industry leaders. It is built on Kubernetes and Istio and can run on GKE, PKS, Minikube, AKS, and other environments. Knative extends Kubernetes to help build modern, source-centric, container-based serverless workloads. It gives developers a simpler way to deploy serverless-style functions, applications, and containers on Kubernetes and Istio.

Knative implements primitives for function and application development through a series of CRDs and associated controllers in Kubernetes, which provide a declarative way to specify what a developer wants. The Knative project is also designed to drive improvements back into Kubernetes and Istio. A diagram of the design can be found on the Knative GitHub page.

There are three major components in Knative: Build, Serving, and Eventing. The Build component allows developers to build images from a defined source, such as a Git repository, and publish them to a container registry. The Serving component uses Istio to route traffic among revisions, automates the flow from container to running function, and provides request-driven compute that can scale to zero. Lastly, the Eventing component allows functions to subscribe to events and handles the management and delivery of those events.

With Knative, we no longer need to worry about orchestrating source-to-container workflows, binding running services to eventing ecosystems, or routing and managing traffic during deployment.
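As a minimal sketch of what deploying a workload with Knative Serving looks like, here is a Service manifest modeled on the helloworld-go sample from the Knative documentation (the image and environment variable come from that sample; treat the names and values as illustrative):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go  # sample app from the Knative docs
        env:
        - name: TARGET      # read by the sample app to build its greeting
          value: "World"

Applying this single manifest gives you a route, a revision, and autoscaling (including scale to zero) without hand-writing Deployment, Service, or Ingress objects.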

Kubernetes: Istio


Istio is an open source tool founded by Google, IBM, and Lyft that provides a uniform way to connect, manage, and secure microservices. It is part of the Cloud Native Computing Foundation (CNCF) ecosystem and currently only supports Kubernetes and Consul-based environments. Istio allows us to manage traffic flow across microservices, enforce access policies, and aggregate telemetry data without requiring changes to the services themselves. It manages how services communicate with one another within a cluster. Istio uses Custom Resource Definitions (CRDs) to extend the Kubernetes API. Its most common use is traffic management. Istio can also provide insight into how applications are behaving and performing, and it can be paired with Grafana for visualization. Istio ships with a command-line interface (istioctl) that is used to deploy and manage back-end services.

Istio has several components: Envoy, Mixer, Pilot, Citadel, and the Node Agent. Envoy is a sidecar proxy that runs in each pod and handles ingress/egress traffic between services in the cluster. Mixer enforces policies such as authentication, request tracing, and telemetry collection at the infrastructure level; it is a central component that the proxies and services leverage to enforce policy. Pilot is responsible for configuring Mixer and Envoy at runtime. Citadel is responsible for issuing certificates and rotating the certificates it generates. Lastly, the Node Agent automates key and certificate handling at the node level.
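For example, once the sidecar injector is installed, Envoy can be added to every pod in a namespace automatically by labeling that namespace (the namespace name here is illustrative):

kubectl label namespace default istio-injection=enabled

New pods created in the namespace will then start with the Envoy sidecar running alongside the application container.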

As a Kubernetes service mesh, Istio manages services by providing dynamic routing, operational metrics, load balancing, and resiliency features such as retries and circuit breaking. Istio improves application reliability by ensuring the resiliency of the microservices running in the cluster. In production environments, as we scale our microservices from tens to thousands, we need a tool like Istio to handle the management and operational complexity.
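As a sketch of the traffic-management side, here is a VirtualService that splits traffic between two versions of a service. The reviews service and its v1/v2 subsets follow Istio's Bookinfo sample, and the subsets are assumed to be defined in a matching DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90    # keep 90% of traffic on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10    # canary 10% of traffic to v2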

Kubernetes: Fluentd

Fluentd is an open source tool that acts as a data collector for a unified logging layer; it is software used to collect and aggregate logs. Fluentd is one of the most widely used logging tools in the DevOps community. It is written in Ruby and C. Most of us are familiar with the Elasticsearch, Logstash, and Kibana (ELK) stack; Fluentd fits into this stack as well, but replaces Logstash. Fluentd ensures that log messages can stream from end to end and allows us to create structured logs from any kind of application. Elasticsearch acts as a document store and full-text search engine that is scalable, fast, and reliable. Kibana works in tandem with Elasticsearch to provide visualization of the data stored and indexed by Elasticsearch. The three components come together to form an excellent log analysis application. This helps solve problems on large deployments of applications on platforms like Kubernetes.
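As a minimal sketch of how Fluentd fits into the stack, here is a configuration that tails container logs on a node and forwards them to Elasticsearch. The paths, tag, and Elasticsearch host are illustrative assumptions, and the output relies on the fluent-plugin-elasticsearch plugin:

<source>
  @type tail                                      # follow log files as they are written
  path /var/log/containers/*.log                  # assumed container log location on the node
  pos_file /var/log/fluentd-containers.log.pos    # bookmark so restarts resume where they left off
  tag kubernetes.*
  <parse>
    @type json                                    # container runtimes commonly emit JSON log lines
  </parse>
</source>

<match kubernetes.**>
  @type elasticsearch                             # requires fluent-plugin-elasticsearch
  host elasticsearch.logging.svc.cluster.local    # assumed in-cluster Elasticsearch service
  port 9200
  logstash_format true                            # index as logstash-YYYY.MM.DD so Kibana picks it up
</match>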

Kubernetes: Ingress

Ingress is an API object that allows communication to services inside a cluster from external sources; it manages inbound connections. Ingress lets us do name-based virtual hosting, load-balance traffic, terminate SSL, and provide externally reachable URLs. It allows us to route and serve traffic based on the request host or path. Currently, Ingress only supports HTTP-based rules. Before Ingress can be used, an Ingress Controller needs to be installed in the cluster, and multiple Ingress Controllers can be installed side by side. The Ingress Controller uses the rules specified in the Ingress configuration to handle traffic coming from outside the cluster. Ingress can also be configured to use TLS by creating a Secret that contains the SSL certificate (.crt) and key (.key) files as data values, and then referencing that Secret in the Ingress configuration. Below are examples of Ingress manifests.

Here is an example of multiple host entries under one Ingress configuration.

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: hello
spec:
  rules:
  - host: aquatribe.com
    http:
      paths:
      - path: /about
        backend:
          serviceName: about-service
          servicePort: 80
  - host: eksmanual.com
    http:
      paths:
      - path: /manual
        backend:
          serviceName: eks-service
          servicePort: 80

Here is one with a single host and multiple paths.

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: hello-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: aquatribe.com
      http:
        paths:
          - path: /about
            backend:
              serviceName: about-svc
              servicePort: 8080
          - path: /consult
            backend:
              serviceName: consult-svc
              servicePort: 8080
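
To serve one of these hosts over TLS, we would first create a Secret from the certificate and key files and then reference it from the Ingress. A sketch, assuming tls.crt and tls.key exist locally and the Secret name is our own choice:

kubectl create secret tls aquatribe-tls --cert=tls.crt --key=tls.key

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: hello-tls
spec:
  tls:
  - hosts:
    - aquatribe.com
    secretName: aquatribe-tls   # the Secret created above
  rules:
  - host: aquatribe.com
    http:
      paths:
      - path: /about
        backend:
          serviceName: about-svc
          servicePort: 8080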

How to update the service account of a running deployment

I have created a deployment and a service account for the purposes of this demo. Next, we will update the running deployment.

Jacksparrow:~ babatundeolu-isa$ kubectl get deploy
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     2         2         2            2           7s
Jacksparrow:~ babatundeolu-isa$ kubectl get sa
NAME      SECRETS   AGE
default   1         23h
hello     1         6m
Inspecting one of the pods (kubectl get pods -o yaml) shows that it is currently using the default service account:

apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: 2018-08-08T17:35:47Z
    generateName: nginx-65899c769f-
    labels:
      pod-template-hash: "2145573259"
      run: nginx
    name: nginx-65899c769f-wgrcm
    namespace: default
    ownerReferences:
    - apiVersion: extensions/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: nginx-65899c769f
      uid: 11acd5ce-9b2a-11e8-be52-025000000001
    resourceVersion: "55566"
    selfLink: /api/v1/namespaces/default/pods/nginx-65899c769f-wgrcm
    uid: 7c3aeed8-9b31-11e8-be52-025000000001
  spec:
    containers:
    - image: nginx
      imagePullPolicy: Always
      name: nginx
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-brtl4
        readOnly: true
    dnsPolicy: ClusterFirst
    nodeName: docker-for-desktop
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
Jacksparrow:~ babatundeolu-isa$ kubectl set sa deployment nginx hello
deployment.apps "nginx" serviceaccount updated
After the update, the Deployment rolls out new pods that use the hello service account:

apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: 2018-08-08T17:38:45Z
    generateName: nginx-769dbc9c-
    labels:
      pod-template-hash: "32586757"
      run: nginx
    name: nginx-769dbc9c-w2mcs
    namespace: default
    ownerReferences:
    - apiVersion: extensions/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: nginx-769dbc9c
      uid: e67fb09e-9b31-11e8-be52-025000000001
    resourceVersion: "55828"
    selfLink: /api/v1/namespaces/default/pods/nginx-769dbc9c-w2mcs
    uid: e685c3fc-9b31-11e8-be52-025000000001
  spec:
    containers:
    - image: nginx
      imagePullPolicy: Always
      name: nginx
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: hello-token-tl6f8
        readOnly: true
    dnsPolicy: ClusterFirst
    nodeName: docker-for-desktop
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: hello
    serviceAccountName: hello
    terminationGracePeriodSeconds: 30

The service account has been updated to hello. Note that changing the service account modifies the pod template, which triggers a rolling update: a new ReplicaSet is created and the pods are replaced (notice the new pod-template-hash and the hello-token volume mount above).

Pod Preset

Pod Preset is used to inject data into pods at creation time. It allows us to add volumes, Secrets, ConfigMaps, and environment variables to pods without defining them in the pod template. We use Pod Presets because they let us isolate certain information from people who do not need to see it, and they keep applications loosely coupled. To use Pod Presets, we must make sure the settings.k8s.io/v1alpha1 API is enabled in the API server configuration and that the PodPreset admission controller is enabled. We then create a Pod Preset object and assign pods to it using label selectors. Once a pod's labels match the Pod Preset's selector, the data in the Pod Preset is injected into the pod. Below is an example of a Pod Preset and a pod manifest that leverages the Pod Preset functionality. We have created a Pod Preset object that injects an environment variable named WEB_PORT, a Secret named web-secret, and a cache volume.

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: web-podpreset
spec:
  selector:
    matchLabels:
      app: web
  env:
    - name: WEB_PORT
      value: "80"
  envFrom:
    - secretRef:
        name: web-secret
  volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-website
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
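
If the admission controller applied the preset, the injected environment variable and volume appear on the created pod even though its template never declared them. A quick way to check (standard kubectl):

kubectl describe pod hello-website   # Environment should list WEB_PORT=80 and Mounts should include /cache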

Kubernetes: Pod Priority

Pod priority is a newer feature in Kubernetes that allows you to influence how pods are scheduled. It was added back in v1.8 and is in beta as of Kubernetes 1.11. The idea of pod priority is simple: it allows us to give preferential treatment to selected applications, and the selection is made by the cluster administrator.

For example, if compute resources are exhausted in the cluster and a pod assigned a high priority needs to be scheduled, the scheduler will evict lower-priority pods from the cluster to accommodate the application with the highest priority. Pod priority therefore affects the scheduling order of pods in the cluster, and pods may be preempted or evicted when the scheduler makes priority decisions.

When a pod is given a priority higher than that of the other pods in the cluster, the scheduler ensures that whenever that pod needs to be scheduled, it is placed at the front of the scheduling queue regardless of other pods pending prior to its creation.

To assign pod priority, a PriorityClass object needs to be created and then referenced in the pod spec of the pod that needs the priority. Here is a sample of how to create a PriorityClass object:

kind: PriorityClass
apiVersion: scheduling.k8s.io/v1alpha1
metadata:
  name: high-priority-app
value: 999999
globalDefault: false
description: "priority for critical apps"

Next, we create a pod that uses the PriorityClass object:

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: nginx
    image: nginx
  priorityClassName: high-priority-app
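
Once the pod is admitted, the priority class name is resolved to its numeric value on the pod itself, which gives a quick way to confirm the assignment (the jsonpath expression is standard kubectl):

kubectl get pod pod-example -o jsonpath='{.spec.priority}'   # should print 999999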

Create a Deployment and expose it imperatively using a single-line command

Here is how to create a deployment and expose it using a single-line command.

Jacksparrow:~ babatundeolu-isa$ kubectl run nginx --image=nginx --expose --port=80
service "nginx" created
deployment.apps "nginx" created
Jacksparrow:~ babatundeolu-isa$

At this point we have created an nginx deployment along with a service for that deployment.
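To confirm both objects exist, we can list them together (the names come from the command above):

kubectl get deployment,service nginx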

Concourse CI/CD tool

Concourse is an open-source, container-centric CI/CD tool written in Go that is heavily used by Pivotal and is gradually gaining recognition in the community. Users declare how automation should take place in YAML-based pipeline manifests. It has support for integrations such as GitHub, JFrog Artifactory, Slack, and others that can be found on the official Concourse page.

A couple of interesting things I really like about Concourse:

  • A user interface that gives visibility into running builds and pipelines and helps facilitate debugging.
  • Anything can be automated or scripted in any language of your choice.
  • The ability to intercept the containers used to run builds in order to make changes or troubleshoot errors.
  • Easy to scale.
  • All containers are ephemeral and reproducible.
  • Support for Vault and different authentication types.
  • Log visibility.

Just like other automation tools, you can specify jobs and have tasks within the jobs specified in a pipeline (Pipeline -> Job -> Task). We also have resources and resource types that specify what should be part of a build. Concourse has three major components: the ATC (Air Traffic Control), which serves the web UI and schedules builds; the TSA, which registers workers; and the workers themselves, which provide the runtime environment for containers and manage caches.
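Here is a minimal sketch of such a pipeline with one resource and one job; the repository URL and task image are illustrative assumptions:

resources:
- name: repo
  type: git                                  # built-in git resource type
  source:
    uri: https://github.com/example/app.git  # hypothetical repository

jobs:
- name: unit-tests
  plan:
  - get: repo
    trigger: true                            # run the job whenever the repo changes
  - task: run-tests
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: golang}         # illustrative task image
      inputs:
      - name: repo
      run:
        path: sh
        args: ["-c", "cd repo && go test ./..."]

The pipeline would be uploaded and started with fly, Concourse's CLI:

fly -t ci set-pipeline -p app -c pipeline.yml
fly -t ci unpause-pipeline -p app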

Since this is a fairly new project, there is still a lot that can be improved in the application. I noticed a couple of container volume issues, and Concourse workers intermittently stalled during builds. The great thing is that Concourse has an active Discord community that can help with issues you encounter.

Concourse can be set up locally on macOS, Windows, and Linux for development purposes. Concourse is infrastructure agnostic, so it can also be set up on AWS, Google Cloud, and Azure. I will be making a post on setting up Concourse locally on macOS later on.

Passing the Certified Kubernetes Application Developer Exam


I took the Certified Kubernetes Application Developer (CKAD) exam a week after taking the Certified Kubernetes Administrator (CKA) exam, and I passed. The CKAD was created by the Cloud Native Computing Foundation (CNCF). The exam is purely practical and costs $300. There were 19 questions with different weights, covering what is specified in the exam curriculum. I had a very different experience with this exam than I did with the CKA: I had 2 hours to tackle all the questions, so I felt the need to be faster than I was when I took the CKA. The passing score of the exam is 66%.

The CKAD exam questions were more focused on troubleshooting deployments and pods and on manipulating the behavior of existing resources. It had more to do with debugging and spanned 4 clusters. I believe the resources I used for the CKA were more than enough to prepare me for the CKAD. Some other resources that can help in passing the exam are the katacoda.com website, Sébastien Goasguen's Kubernetes Cookbook, and the kubernetes.io cheat sheet page. I also suggest knowing the kubectl commands that let you create deployments and services imperatively on the command line, because that will save you time. Time management is key, as is prioritizing questions with higher point values during the exam.

During the exam, try not to spend too much time on questions that demand it; there is a notes feature in the exam menu where you can write down questions you would like to come back to. Understanding the kubectl edit, kubectl set, kubectl explain, and kubectl describe commands will be useful for debugging and making quick changes. Also remember to create resources in the namespace specified in each question.

I faced problems similar to the ones from the CKA, like the GateOne terminal freezing while my time kept running. Another problem I had was making assumptions before carefully reading questions: I was creating resources that had already been created when all I had to do was make changes to the existing YAML file. I had that problem because I had taken the CKA and expected things to go a certain way.

I personally think the CKAD is not as difficult as the CKA, although the two hours provided to take the exam might have made it a little more challenging. I would suggest taking the CKAD before the CKA because it will give you the exam mindset to work fast.