Kubernetes is an open-source container orchestration platform designed to manage containerized workloads and services, making it easy to deploy and manage large-scale, microservices-based applications. Prepare for your interview with the top Kubernetes interview questions curated by our experts. These Kubernetes interview questions and answers will help you transition to a top DevOps job role. We have covered conceptual questions for freshers and experts, including the difference between a ConfigMap and a Secret, monitoring, ways to test a manifest, automatic load balancing, rolling updates, self-healing, and horizontal scaling. With these detailed answers as your resource, you can be confident that you will be well prepared for your next interview.
ConfigMaps ideally store application configuration in a plain-text format, whereas Secrets store sensitive data such as passwords in a base64-encoded format. Both ConfigMaps and Secrets can be consumed as volumes and mounted inside a pod through the pod definition file.
Config map:
kubectl create configmap myconfigmap --from-literal=env=dev
Secret:
echo -n 'admin' > ./username.txt
echo -n 'abcd1234' > ./password.txt
kubectl create secret generic mysecret --from-file=./username.txt --from-file=./password.txt
When a node is tainted, pods are not scheduled onto it by default. However, if we still have to schedule a pod to a tainted node, we can add a matching toleration to the pod spec.
Apply a taint to a node:
kubectl taint nodes node1 key=value:NoSchedule
Apply toleration to a pod:
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
This is one of the most frequently asked Kubernetes interview questions for freshers in recent times.
The mapping between a PersistentVolume and a PersistentVolumeClaim is always one to one. Even when you delete the claim, the PersistentVolume remains because persistentVolumeReclaimPolicy is set to Retain, and it will not be reused by any other claim. Below is the spec to create the PersistentVolume (a matching claim is sketched after it).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
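For completeness, a claim that could bind to this volume might look like the sketch below; the claim name is an illustrative assumption and is not part of the original answer, and actual binding also depends on the cluster's storage class defaults.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc                # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # must be compatible with the PV's access modes
  resources:
    requests:
      storage: 5Gi           # requests no more than the PV offers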
You should create a ServiceAccount. A service account creates a token, and tokens are stored inside a Secret object. By default, Kubernetes automatically mounts the default service account into every pod. However, we can disable this behaviour by setting automountServiceAccountToken: false in our spec. Also note that each namespace has its own default service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
automountServiceAccountToken: false
A Pod always ensures that its container is running, whereas a Job ensures that its pods run to completion. A Job is meant for a finite task.
Examples:
kubectl run mypod1 --image=nginx --restart=Never
kubectl run mypod2 --image=nginx --restart=OnFailure
○ → kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mypod1    1/1     Running   0          59s
○ → kubectl get job
NAME      DESIRED   SUCCESSFUL   AGE
mypod1    1         0            19s
By default, a Deployment in Kubernetes uses RollingUpdate as its strategy. Let's say we have an example that creates a deployment in Kubernetes:
kubectl run nginx --image=nginx    # creates a deployment
○ → kubectl get deploy
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            0           7s
Now let’s assume we are going to update the nginx image
kubectl set image deployment nginx nginx=nginx:1.15 # updates the image
Now when we check the replica sets
kubectl get replicasets # get replica sets
NAME               DESIRED   CURRENT   READY   AGE
nginx-65899c769f   0         0         0       7m
nginx-6c9655f5bb   1         1         1       13s
From the above, we can see that a new replica set was created and the old replica set was scaled down.
kubectl rollout status deployment nginx
# check the status of a deployment rollout
kubectl rollout history deployment nginx
# check the revisions in a deployment
○ → kubectl rollout history deployment nginx
deployment.extensions/nginx
REVISION   CHANGE-CAUSE
1          <none>
2          <none>
This is one of the most frequently asked Kubernetes interview questions and answers for freshers in recent times. Here is how to answer this.
We can introduce probes. A liveness probe with a Pod is ideal in this scenario.
A liveness probe checks whether the application in a pod is running; if this check fails, the container gets restarted. This is ideal in scenarios where the container is running but the application inside it has crashed.
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080    # an httpGet probe needs a port; 8080 matches the upstream liveness example
A sidecar is a pod spec that runs the main container along with a helper container that does some utility work, where the helper is not strictly required for the main container to function.
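As an illustrative sketch (the container names, images, and log paths are assumptions, not from the original answer), a sidecar pod that ships the main container's logs could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar       # illustrative name
spec:
  containers:
  - name: main-app
    image: nginx               # main container writing logs
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper          # sidecar: reads the logs produced by the main container
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}               # shared scratch volume between the two containers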
The adapter container inspects the contents of the app's files, restructures and reformats them, and writes the correctly formatted output to the expected location.
It connects containers with the outside world. It is a proxy that allows other containers to connect to a port on localhost.
reference:https://matthewpalmer.net/Kubernetes-app-developer/articles/multi-container-pod-design-patterns.html
The main difference between replication controllers and replica sets is the selector support: Replication Controllers only support equality-based (`label: value`) selectors, whereas ReplicaSets also support set-based selectors (matchLabels and matchExpressions). Also note that Replication Controllers are considered legacy in recent versions of Kubernetes; ReplicaSets (usually managed through Deployments) are the recommended replacement.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
Reference: https://Kubernetes.io/docs/concepts/workloads/controllers/replicaset/
By declaring pods with one or more labels and giving the service a selector that matches them; the selector acts as the glue that sticks the service to the pods.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
If we have a set of Pods that carry the label "app=MyApp", the service will start routing traffic to those pods.
Containers in the same pod act as if they are on the same machine: they share an IP address and a network namespace, so they are localhost to each other and you can reach another container at localhost:port. Discovery between components works like this: Component A's pods -> Service of Component B -> Component B's pods. Services get domain names of the form servicename.namespace.svc.cluster.local, and the default DNS search path of pods includes this, so a pod in namespace Foo can find a Service bar in the same namespace simply by connecting to `bar`.
No. Because there is only one replica, any change to the StatefulSet would result in an outage: a rolling update of a StatefulSet has to tear down one (or more) old pods before replacing them. With two replicas, a rolling update would create the second pod but it would never succeed, because the persistent disk is locked by the first (old) running pod; the rolling update does not delete the first pod in time to release the lock on the disk for the second pod to use it. If there is only one replica, the rolling update goes 1 -> 0 -> 1. If the app can run with multiple identical instances concurrently, use a Deployment and roll 1 -> 2 -> 1 instead.
This is a common yet one of the most important Kubernetes interview questions and answers for experienced professionals, don't miss this one.
Some best practices for securing the cluster:
- Use the correct authorization mode with the API server (--authorization-mode=Node,RBAC).
- Ensure all traffic is protected by TLS.
- Use API authentication (a smaller cluster may use certificates, but larger multi-tenant clusters may want AD or some OIDC authentication).
- Make the kubelet protect its API via --authorization-mode=Webhook.
- Make sure the kube-dashboard uses a restrictive RBAC role policy.
- Monitor RBAC failures.
- Remove default ServiceAccount permissions.
- Filter egress to cloud API metadata endpoints.
- Filter out all traffic coming into the kube-system namespace except DNS.
- Apply a default deny policy on all inbound traffic in all namespaces and explicitly allow per deployment.
- Use a PodSecurityPolicy to restrict containers and protect the node.
- Keep Kubernetes at the latest version.
kube-proxy does two things: it manages host sub-netting for services and makes services reachable from other components. kube-proxy handles this network communication, and shutting down the master does not stop a node from serving traffic, because kube-proxy keeps working in the same way through the service. The iptables rules route a connection to kube-proxy, which then proxies it to one of the pods backing the service; in other words, kube-proxy translates the destination address to one of the endpoints.
Container Runtime
A Kubernetes worker node is a machine where workloads get deployed. The workloads are containerised applications, so every node in the cluster must run a container runtime such as Docker in order to run them. You can have multiple masters mapped to multiple worker nodes, or a single master with a single worker node; the worker nodes do not gossip or perform leader election, so there is no need for odd-numbered quantities. The role of the container runtime is to start and manage containers. The kubelet is responsible for maintaining the state of its node; it receives commands and work from the master, performs health checks on the node, and makes sure it is healthy. The kubelet is also responsible for collecting metrics from pods. The kube-proxy is the component that manages host subnetting and makes services available to other components.
Yes, using a replication controller, but the pod may be rescheduled to another host if you have multiple nodes in the cluster.
A replication controller is a supervisor for long-running pods. An RC launches a specified number of pods, called replicas, and makes sure that they keep running. Replication Controllers only support the simple map-style `label: value` selectors. Replication Controller and ReplicaSet are not very different; you can think of a ReplicaSet as a Replication Controller with a more expressive selector format. If pods are managed by a replication controller or replica set, you can kill the pods and they will be restarted automatically. The yaml definition is as given below:
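The original answer does not include the manifest itself, so here is a minimal sketch; the name, label, and image are illustrative assumptions.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc             # illustrative name
spec:
  replicas: 1
  selector:
    app: nginx               # simple map-style selector
  template:
    metadata:
      labels:
        app: nginx           # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx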
You need to have some way of triggering the reload: either do a check every minute, expose a reload endpoint in the API, or project the ConfigMap as a volume and use inotify to be aware of the change. It depends on how the ConfigMap is consumed by the container. If it is exposed as environment variables, then no. If it is a volumeMount, then the file is updated in the container, ready to be consumed by the service, but the service still needs to reload the file.
The container does not restart. If the ConfigMap is mounted as a volume, it is updated dynamically; if it is an environment variable, it keeps the old value until the container is restarted. When you volume-mount the ConfigMap into the pod, the projected file is updated periodically, not in real time. The app then has to recognise that the config on disk has changed and reload it.
Yes, the scheduler will make sure (as long as you have the resources) that the desired number of pods is met, so if you delete a pod, the ReplicaSet will recreate it. Also, deleting a Service won't delete the ReplicaSet; if you remove a Service or Deployment, you have to remove the resources it created as well. Having a single replica for a deployment is usually not recommended, because you cannot scale out and you are effectively treating the deployment as a single pod.
Any app should be `Ingress` -> `Service` -> `Deployment` -> (volume mount or 3rd-party cloud storage)
You can skip ingress and just have `LoadBalancer (service)` -> `Deployment` (or Pod but they don't auto restart, deployments do)
One of the most frequently posed Kubernetes scenario based interview questions, be ready for this conceptual question.
loadBalancerIP is not a core Kubernetes concept; you need a cloud provider or a controller like MetalLB to set up the load balancer IP. When MetalLB sees a Service of type=LoadBalancer created with a ClusterIP, it allocates an IP from its pool and assigns it as that Service's external load-balanced IP. externalIP, on the other hand, is set up by the kubelet so that any traffic sent to any node with that externalIP as its final destination gets routed. `ExternalIP` assumes you already have control over that IP and have correctly arranged for traffic to it to eventually land on one or more of your cluster nodes; it is a tool for implementing your own load balancing. You should not use it on cloud platforms like GKE; there you want to set `spec.loadBalancerIP` to the IP you pre-allocated. If you try to create the service using `loadBalancerIP` instead of `externalIP` without a controller to fulfil it, the ephemeral port is not created and the external IP address goes to `<pending>` and never updates.
You need to add liveness and readiness probes to query each container; if the liveness probe fails, the container is restarted. Add a liveness probe that calls an API on the other container that returns 200: for example, if X depends on Y, add a liveness probe in X that checks the health of Y. Both readiness and liveness probes run repeatedly for the life of the container, and they only start after the container has been started; the kubelet component performs the checks. Set initialDelaySeconds to anything from a few seconds to a few minutes depending on the app's start time. Below is the configuration spec.
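The original answer omits the spec, so this is a minimal sketch; the image, paths, ports, and timing values are illustrative assumptions.

spec:
  containers:
  - name: myapp
    image: myapp:latest            # illustrative image
    livenessProbe:
      httpGet:
        path: /healthz             # endpoint that also verifies the dependency's health
        port: 8080
      initialDelaySeconds: 15      # allow the app time to start
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10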
An ingress is an object that holds a set of rules for an ingress controller, which is essentially a reverse proxy and is used (in the case of nginx-ingress, for example) to render a configuration file. It allows access to your Kubernetes services from outside the Kubernetes cluster. An Ingress Controller is a controller, typically deployed as a Kubernetes Deployment. That deployment runs a reverse proxy (the ingress part) and a reconciler (the controller part); the reconciler configures the reverse proxy according to the rules in the ingress objects. Ingress controllers watch the Kubernetes API and update their configuration on changes. The rules only take effect when a controller is listening for them: you can deploy a bunch of ingress rules, but nothing will happen unless you have a controller that can process them.
LoadBalancer service -> Ingress controller pods -> App service (via ingress) -> App pods
Yes. hostNetwork on the DaemonSet gets you onto the host, so an interface with an Anycast IP should work; you will have to proxy the data through the DaemonSet. A DaemonSet lets you run the pod on the host network, so Anycast is possible. To be precise, any pod can be specified to run on the host network; the only thing special about a DaemonSet is that you get one pod per host. Most of the issues with respect to IP space are solved by DaemonSets. Since kube-proxy runs as a DaemonSet, the node has to be Ready for the kube-proxy DaemonSet pod to be up.
The ingress exposes port 80 externally for the browser to access and connects to a service that listens on 8080; the ingress listens on port 80 by default. An "ingress controller" is a pod that receives external traffic and handles the ingress, and it is configured by an ingress resource. For this you need to configure the ingress controller selector: if no ingress controller selector is specified, no ingress controller will handle the ingress.
A simple ingress config will look like the following.
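The original answer does not include the config itself, so this is an illustrative sketch using the networking.k8s.io/v1 API; the host, service name, and ports are assumptions.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress              # illustrative name
spec:
  rules:
  - host: myapp.example.com     # illustrative host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service    # the backing Service
            port:
              number: 8080      # the Service port the ingress routes to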
A staple in K8S interview questions and answers, be prepared to answer this one using your hands-on experience.
The Deployment updates Pods in a rolling-update fashion when .spec.strategy.type==RollingUpdate. You can specify maxUnavailable and maxSurge to control the rolling-update process; rolling update is the default deployment strategy. kubectl rolling-update updates Pods and ReplicationControllers in a similar fashion, but Deployments are recommended since they are declarative and have additional features, such as rolling back to any previous revision even after the rolling update is done. For rolling updates to work as one may expect, a readiness probe is essential. Redeploying deployments is easy, but rolling updates do it nicely without any downtime. The way to define a rolling update for a Deployment and kubectl apply it is shown below.
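The original answer leaves out the manifest, so this is a minimal sketch; the name, image, probe, and the exact maxSurge/maxUnavailable values are illustrative assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment          # illustrative name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                 # at most one extra pod during the update
      maxUnavailable: 0           # never drop below the desired replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        readinessProbe:           # readiness probe so the rollout waits for ready pods
          httpGet:
            path: /
            port: 80

Apply the change with kubectl apply -f deployment.yaml and watch it with kubectl rollout status deployment nginx-deployment.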
Yes, it would scale all of them. Internally the Deployment creates a ReplicaSet (which does the scaling), and that ReplicaSet creates a set number of pods; the pod is what actually holds both containers. If you want to scale the containers independently, they should be in separate pods (and therefore separate ReplicaSets, Deployments, and so on). For the HPA to work, you need to specify the minimum and maximum replicas and the threshold, i.e. at what percentage of CPU or memory utilisation you want your pods to autoscale. Instead of manually running kubectl autoscale deployment, you can use the yaml file below to do the same.
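A sketch of such a file is shown below; the deployment name, replica bounds, and threshold are illustrative, and the autoscaling/v1 API used here only supports CPU utilisation.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa                       # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp                         # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70    # scale out above 70% average CPU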
A staple in Kubernetes advanced interview questions with answers, be prepared to answer this one using your hands-on experience. This is also one of the top interview questions to ask a Kubernetes engineer.
Deployments are for stateless services; for this you want to use a StatefulSet, or just define 3+ pods without a controller at all. If you care about stable pod names and volumes, you should go for a StatefulSet. Using StatefulSets you can keep track of which pod is attached to which disk. StatefulSets make vanilla Kubernetes capable of keeping pod state (stable network identity, storage, and so on), which makes it easy to run clustered databases. A StatefulSet is a controller that orchestrates pods towards a desired state. StatefulSets, formerly known as PetSets, help if you are hosting your own database. Essentially, a StatefulSet is for applications that do not care which node they run on but do need unique storage and state.
SIGKILL, which immediately terminates the container; a new one is then spawned and the old one is reported as OOM-killed. The OS, when using cgroup-based containerisation (Docker, rkt, etc.), does the OOM killing; Kubernetes simply sets the cgroup limits and is not ultimately responsible for killing the processes. On a normal termination, `SIGTERM` is sent to PID 1 and Kubernetes waits for `terminationGracePeriodSeconds` (30 seconds by default) before sending the `SIGKILL`; you can change that time with terminationGracePeriodSeconds in the pod spec, and as long as your container will eventually exit, it is fine to have a long grace period. If you want a graceful restart, it has to be handled inside the pod. If you don't want the container killed, then you should not set a memory `limit` on the pod, and there is no way to disable OOM killing for the whole node. Also, when the liveness probe fails, the container gets a SIGTERM and then a SIGKILL after the grace period.
This, along with other interview questions for Kubernetes, is a regular feature in Kubernetes interviews, be ready to tackle it with the approach mentioned.
When we create a job spec, we can set the activeDeadlineSeconds field; this field limits the duration of the job, and once the job reaches that threshold, it is terminated.
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: mycronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
      name: google-check-job
    spec:
      activeDeadlineSeconds: 200    # belongs to the Job spec: the job is killed after 200 seconds
      template:
        metadata:
          name: mypod
        spec:
          restartPolicy: OnFailure
          containers:
          - name: mycontainer
            image: alpine
            command: ["/bin/sh"]
            args: ["-c", "ping -w 1 google.com"]
Use the --dry-run flag to test the manifest. This is really useful not only to ensure that the yaml syntax is right for a particular Kubernetes object, but also to ensure that the spec has the required key-value pairs.
kubectl create -f <test.yaml> --dry-run
Let us now look at an example Pod spec that will launch an nginx pod
○ → cat example_pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: mynamespace
spec:
  containers:
  - name: my-nginx
    image: nginx
○ → kubectl create -f example_pod.yaml --dry-run
pod/my-nginx created (dry run)
Rollbacks and rolling updates are features of the Deployment object in Kubernetes. We roll back to an earlier Deployment revision if the current state of the Deployment is not stable due to the application code or the configuration. Each rollback updates the revision of the Deployment.
○ → kubectl get deploy
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            1           15h
○ → kubectl rollout history deploy nginx
deployment.extensions/nginx
REVISION   CHANGE-CAUSE
1          <none>
2          <none>

To roll back, run:

kubectl rollout undo deploy <deploymentname>

○ → kubectl rollout undo deploy nginx
deployment.extensions/nginx
○ → kubectl rollout history deploy nginx
deployment.extensions/nginx
REVISION   CHANGE-CAUSE
2          <none>
3          <none>
We can also check the history of the changes by the below command
kubectl rollout history deploy <deploymentname>
Helm is a package manager which allows users to package, configure, and deploy applications and services to the Kubernetes cluster.
helm init    # when you execute this command, the client creates a deployment in the cluster, and that deployment installs Tiller, the server side of Helm
The packages we install through the client are called charts; they are bundles of templatized manifests. All the templating work is done by Tiller.
helm search redis        # searches for a specific application
helm install stable/redis    # installs the application
helm ls                  # lists the installed applications
Expect to come across this, one of the most important Kubernetes interview questions for experienced professionals in DevOps, in your next interviews.
Generally, in Kubernetes, a pod can have many containers. An init container runs before any other container in the pod.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
      {
        "name": "init-myservice",
        "image": "busybox",
        "command": ["sh", "-c", "until nslookup myservice; do echo waiting for myservice; sleep 2; done;"]
      }
    ]'
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
Pod affinity ensures that two pods are co-located on the same node.
Node Affinity
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
Pod Affinity
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone    # required field; "same zone" as described below
The pod affinity rule says that the pod can be scheduled to a node only if that node is in the same zone as at least one already-running pod that has a label with key “security” and value “S1”
Reference: https://Kubernetes.io/docs/concepts/configuration/assign-pod-node/
When we take a node down for maintenance, the pods running on that node are also affected. However, we can avoid an outage by using the command below.
kubectl drain <nodename>
When we run the above command, it marks the node unschedulable for new pods; the existing pods are then evicted if the API server supports eviction, otherwise they are deleted.
Once the node is up and running again and you want to add it back into rotation, run the command below.
kubectl uncordon <nodename>
Note: If you prefer not to use kubectl drain (such as to avoid calling to an external command, or to get finer control over the pod eviction process), you can also programmatically cause evictions using the eviction API.
More info: https://Kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
Just do a port-forward:

kubectl port-forward [nginx-pod-name] 80:80
kubectl port-forward [wordpress-pod-name] drupal-port:wordpress-port
To make it permanent, you need to expose those pods through NodePorts. Whenever you do kubectl port-forward, it adds a rule to the firewall to allow that traffic across nodes, which by default is not allowed since flannel or the firewall probably blocks it. kubectl proxy tries to connect over the network of the apiserver host; port-forward, on the other hand, is a mechanism that the node's kubelet exposes over its own API.
One way is init containers: they are for one-shot tasks that start, run, and end, all before the next init container or the main container starts. But if a client in one container wants to consume resources exposed by a server in another container, or if that server ever crashes or is restarted, the client will need to retry connections anyway, so the client can simply always retry, even if the server isn't up yet. The best way is the sidecar pattern, where one container is the main one and the other containers expose metrics, logs, an encrypted tunnel, or some such; in that case the other containers can be killed when the main one is done, crashed, or evicted.
A must-know for anyone looking for top Kubernetes practical interview questions, this is one of the frequently asked Kubernetes basic interview questions.
Restarting the kubelet, which has to happen for an upgrade, will cause all the Pods on the node to stop and be started again. It is generally better to drain a node, because that way Pods can be gracefully migrated and things like Disruption Budgets can be honoured. The problem is that the kubelet keeps up with the state of all running pods; when it goes away, the containers don't necessarily die, but as soon as it comes back up they are all killed so the kubelet can create a clean slate. Since the kubelet communicates with the apiserver, if something happens in the middle of the upgrade process, rescheduling of pods may take place and health checks may fail during the process. During the restart, the kubelet will stop querying the API, so it won't start or stop containers, and Heapster won't be able to fetch system metrics from cAdvisor. Just make sure it is not down for too long, or the node will be removed from the cluster!
The service selects pods based on labels, so if no pods have the appropriate labels, the service has nothing to route to; the labels can be anything you like. Since all pod names should be unique, you can simply set the labels to the pod name. Since StatefulSets create the same pod template multiple times, the pods won't automatically carry distinct labels you could use to point separate services at individual pods; if you give the pods their own labels manually, it will work. A service selects pods via its selector, which must match the pods' labels. A Grafana dashboard Service, for example, follows the same pattern; a sketch is shown below.
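The original .yaml was not preserved, so this is only an illustrative sketch; the names, labels, and port are assumptions.

apiVersion: v1
kind: Service
metadata:
  name: grafana-dashboard      # illustrative name
  labels:
    app: grafana               # label on the Service itself
spec:
  selector:
    app: grafana               # must match the labels on the Grafana pod(s)
  ports:
  - protocol: TCP
    port: 3000                 # Grafana's default port, assumed here
    targetPort: 3000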
If you are mounting the secret as a volume into your pod, then when the secret is updated the content will be updated in your pod without the pod restarting; it is up to your application to detect that change and reload, or you can write your own logic that rolls the pods when the secret changes. The volumeMount controls what part of the secret volume is mounted into a particular container (it defaults to the root, containing all the files, but can point to a specific file using `subPath`) and where in the container it should be mounted, via `mountPath`. An example spec is sketched after the next paragraph.
Also, it depends on how the secret is consumed by the container. If it is exposed as environment variables, then no: the value stays as the old one until the container is restarted. If it is a volumeMount, the file is updated in the container, ready to be consumed by the service, but the service needs to reload the file; the container itself does not restart.
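A minimal sketch of such a mount, assuming a secret named mysecret; the container name, image, and mount path are illustrative.

spec:
  containers:
  - name: myapp
    image: myapp:latest              # illustrative image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets        # the secret's files appear here inside the container
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mysecret           # the Secret created earlier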
By using a Service object. The reason is that if the database pod goes away, it will come up with a different name and IP address, which means the connection string would need to be updated every time; managing that is difficult. The Service proxies traffic to the pods and also load-balances traffic if there are multiple pods to talk to. It has its own IP, and as long as the Service exists, a pod referencing this Service as its upstream will keep working. If the pods behind the Service are not running, the caller will not see that; the Service will try to forward the traffic and return a 502 Bad Gateway. So just define the Service and then bring up your pods with the proper labels so the Service picks them up.
You can attach an image pull secret to a service account; any pod using that service account (including the default one) can then take advantage of the secret. You can also bind the pullSecret directly to your pod, but you are still left with having to create the secret every time you create a namespace.

Alternatively, create the rc/deployment manually and either specify the imagePullSecret or a service account that has the secret, or add the imagePullSecret to the default service account, in which case you would be able to use `kubectl run` and not have to make any manual changes to the manifest. Depending on your environment and how sensitive this imagePullSecret is, the approach will change.
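For instance, attaching a pull secret to the default service account could look like the following sketch; the secret name and registry details are illustrative assumptions.

kubectl create secret docker-registry myregistrykey --docker-server=registry.example.com --docker-username=myuser --docker-password=mypassword
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'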
ConfigMaps are always mounted read-only. If you need to modify a ConfigMap in a pod, you should copy it from the ConfigMap mount to a regular file in the pod and then modify it. To handle this, use an init container that mounts the ConfigMap, copies its contents into an `emptyDir` volume, and shares that volume with the main container.

ConfigMaps are mounted read-only so that you cannot touch the files; when the source ConfigMap changes, the mounted files also change, so if you were to modify the locally mounted file, it would be overwritten anyway.
Don't be surprised if this question pops up as one of the top interview questions on Kubernetes in your next interview.
If the ConfigMap is mounted into the pod as a volume, it will update automatically, although not instantly, and the files will change inside the container. If it is consumed as an environment variable, it keeps the old value until the container is restarted.
For example: create a new config.yaml with your custom values
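The example file itself is not included in the original answer, so this is a sketch with assumed names, keys, and values.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # illustrative name
data:
  config.yaml: |
    log_level: debug            # assumed custom values
    feature_flag: "true"

Create it with kubectl create -f config.yaml (or kubectl create configmap app-config --from-file=config.yaml).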
Then create a pod definition, referencing the ConfigMap
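Again as a sketch; the pod name, image, and mount path are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: app-pod                 # illustrative name
spec:
  containers:
  - name: app
    image: nginx                # illustrative image
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config    # config.yaml appears here inside the container
  volumes:
  - name: config-volume
    configMap:
      name: app-config          # the ConfigMap created above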