Software development and IT teams use the DevOps methodology to automate and integrate processes so that they can deliver applications and services continuously. DevOps engineers are shaping how software and IT products reach the market today. These top interview questions on DevOps equip you to answer questions on the CI/CD pipeline, the git commit object, how 'git rebase' works, client-level git hooks, and the difference between git reset and git revert. These sample DevOps interview questions are the ones candidates get asked most frequently, whether they are beginner, intermediate, or expert professionals. Prepare yourself with these top interview questions on DevOps and land the best jobs as a DevOps engineer, DevOps test analyst, and other top DevOps profiles. So, let us jump to the list of the top DevOps questions with answers.
DevOps is an approach that brings the development and operations teams together for better, bug-free continuous delivery and integration of the source code.
DevOps is about automating the entire SDLC (Software Development Life Cycle) process with the implementation of CI/CD practices.
CI/CD are the Continuous integration and continuous deployment methodologies.
Every source code check-in automatically builds and unit-tests the entire code against a production-like environment, and the code is continuously deployed to the production environment once it passes its automated tests.
That eliminates the long feedback, bug-fix, and product-enhancement loops between every release.
Every team takes accountability for the entire product, right from requirement analysis to documentation, coding, testing in development environments, code deployment, and continuous improvement based on bugs and feedback from reviewers and customers.
Considering DevOps to be an ideology for achieving a quality product, every organization has its own guidelines and approach towards it. Some of the popular tools I have used are Git, Jenkins, Docker, and Puppet, all covered in the questions below.
This is one of the most frequently asked basic DevOps interview questions for freshers in recent times.
Git is a Distributed Version Control System; used to logically store and backup the entire history of how your project source code has developed, keeping a track of every version change of the code.
Git facilitates very flexible and efficient branching and merging of your code with other collaborators. Being distributed, git is extremely fast and more reliable, as every developer has a local copy of the entire repository.
Git allows you to undo the mistakes in the source code at different tiers of its architecture namely- Working directory, Staging (Index) area, Local repository, and Remote repository.
Using Git, we can always get an older version of our source code and work on it. Git tracks every bit of data, as it checksums every file into a unique hash code and refers to files via these pointers.
To summarize, Git is the most efficient and widely used VCS, used by major projects and companies such as the Linux kernel, Google, Facebook, Microsoft, Twitter, LinkedIn, Netflix, Android, Amazon, IBM, and Apple iOS, to name a few.
When a project repository is initialized to be a git repository, git stores all its metadata in a hidden folder “.git” under the project root directory.
Git repository is a collection of objects.
Git has 4 types of objects – blobs, trees, tags, and commits.
Every commit creates a new commit object with a unique SHA-1 hash_id.
Each commit object has a pointer reference to the tree object, its parent object, author, committer and the commit message.
Diagram: Single Commit object
To see the commit log message along with the textual diff of the code, run:
git show <commit_id>
Divya1@Divya:initialRepo [master] $git show f9354cb
commit f9354cb08d91e80cabafd5b54d466b6055eb2927
Author: divya bhushan <divya_bhushan@hotmail.com>
Date:   Mon Feb 11 23:39:24 2019 +0100

    Add database logs.

diff --git a/logs/db.log b/logs/db.log
new file mode 100644
index 0000000..f8854b0
--- /dev/null
+++ b/logs/db.log
@@ -0,0 +1 @@
+database logs
To read a commit object, git provides the 'git cat-file' utility.
Divya1@Divya:initialRepo [master] $git cat-file -p f9354cb
tree 2a85825b8d20918350cc316513edd9cc289f8349
parent 30760c59d661e129329acfba7e20c899d0d7d199
author divya bhushan <divya_bhushan@hotmail.com> 1549924764 +0100
committer divya bhushan <divya_bhushan@hotmail.com> 1549924764 +0100

Add database logs.
A tree object is like an OS directory that stores references to other directories and files (blob type).
Divya1@Divya:initialRepo [master] $git cat-file -p 2a85825b8d20918350cc316513edd9cc289f8349
100755 blob 054acd444517ad5a0c1e46d8eff925e061edf46c    README.md
040000 tree dfe42cbaf87e6a56b51dab97fc51ecedfc969f39    code
100644 blob e08d4579f39808f3e2830b5da8ac155f87c0621c    dockerfile
040000 tree 014e65a65532dc16a6d50e0d153c222a12df4742    logs
Reset Vs Revert
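The comparison behind this heading: 'git reset' moves the current branch pointer to an earlier commit and rewrites history, so it is best kept to local, unshared branches; 'git revert' instead records a new commit that undoes the changes of an earlier commit, leaving history intact and safe for shared branches. A minimal sketch of both (the commit id is illustrative):

--Undo the last commit but keep its changes staged; history is rewritten
git reset --soft HEAD~1

--Undo the last commit and discard its changes entirely
git reset --hard HEAD~1

--Safely undo a published commit by creating a new 'undo' commit
git revert f9354cb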
This is one of the most frequently asked DevOps coding interview questions and answers for freshers in recent times.
There are scenarios wherein one would like to merge a quickfix or feature branch without a huge commit history into another branch such as 'dev' or 'uat', and yet maintain a linear history.
A non-fast-forward 'git merge' would result in a diverged history. Also, when one wants the merged feature commits to be the latest commits, 'git rebase' is an appropriate way of merging the two branches.
'git rebase' replays the commits of the current branch and places them on top of the tip of the rebased branch. Since it replays the commits, rebase rewrites the commit objects and creates new object ids (SHA-1). Word of caution: do not use it if the history is on a release/production branch shared on the central server. Limit rebase to your local repository, for quickfix or feature branches only.
Steps:
Say there is a ‘dev’ branch that needs a quick feature to be added along with the test cases from ‘uat’ branch.
Develop the new feature and make commits in ‘new-feature’ branch.
[dev] $git checkout -b new-feature
Divya1@Divya:rebase_project [new-feature] $git add lib/commonLibrary.sh && git commit -m 'Add commonLibrary file'
Divya1@Divya:rebase_project [new-feature] $git add feature1.txt && git commit -m 'Add feature1.txt'
Divya1@Divya:rebase_project [new-feature] $git add feature2.txt && git commit -m 'Add feature2.txt'
Meanwhile, merge the 'uat' test cases into 'dev':
[dev] $git merge uat
Divya1@Divya:rebase_project [dev] $git checkout new-feature
Divya1@Divya:rebase_project [new-feature] $git rebase dev
First, rewinding head to replay your work on top of it...
Applying: Add commonLibrary file
Applying: Add feature1.txt
Applying: Add feature2.txt
Divya1@Divya:rebase_project [new-feature] $git checkout dev
Divya1@Divya:rebase_project [dev] $git merge new-feature
Updating 5044e24..3378815
Fast-forward
 feature1.txt         |  1 +
 feature2.txt         |  1 +
 lib/commonLibrary.sh | 16 ++++++++++++++++
 3 files changed, 18 insertions(+)
 create mode 100644 feature1.txt
 create mode 100644 feature2.txt
 create mode 100644 lib/commonLibrary.sh
This results in a linear history with the 'new-feature' commits at the top and the earlier 'dev' commits below them.
Divya1@Divya:rebase_project [dev] $git hist
* 3378815 2019-02-14 | Add feature2.txt (HEAD -> dev, new-feature) [divya bhushan]
* d3859c5 2019-02-14 | Add feature1.txt [divya bhushan]
* 93b76f7 2019-02-14 | Add commonLibrary file [divya bhushan]
*   5044e24 2019-02-14 | Merge branch 'uat' into dev [divya bhushan]
|\
| * bb13fb0 2019-02-14 | End of uat work. (uat) [divya bhushan]
| * 0ab2061 2019-02-14 | Start of uat work. [divya bhushan]
* | a96deb1 2019-02-14 | End of dev work. [divya bhushan]
* | 817544e 2019-02-14 | Start of dev work. [divya bhushan]
|/
* 01ad76b 2019-02-14 | Initial project structure. (tag: v1.0, master) [divya bhushan]

(Here 'git hist' is a local alias for a formatted 'git log --graph --oneline'; it is not a built-in git command.)
NOTE: ‘dev’ will show a diverged commit history for ‘uat’ merge and a linear history for ‘new-feature’ merge.
Every source code deployment needs to be portable and compatible on every device and environment.
Applications and their runtime environment, such as libraries and other dependencies like binaries, jar files, and configuration files, are bundled up (packaged) in a Container.
Containers as a whole are portable, consistent, and compatible with any environment.
In other words, a developer can run their application in any environment: dev, uat, preprod, and production, without worrying about the application's run-time dependencies.
A developer writes the instructions defining the application and all its dependencies in a file called a "Dockerfile". The Dockerfile is used to create a 'Docker image' using the 'docker build <directory>' command. The build is run by the docker daemon.
When you run a Docker image “Containers” are created. Containers are runtime instances of a Docker image.
Image credit: docs.docker.com
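A minimal sketch of this flow, assuming a trivial application consisting of a single hypothetical script 'app.sh' kept next to the Dockerfile:

# Dockerfile: bundle the app and its runtime environment
FROM ubuntu:latest
COPY app.sh /home/app.sh
CMD ["bash", "/home/app.sh"]

--Build the image and run a container from it
docker build -t myapp .
docker run myapp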
Expect to come across this, one of the most important DevOps interview questions for experienced professionals in web development, in your next interviews.
--Get docker images from docker hub or your docker repository
docker pull busybox
docker pull centos
docker pull divyabhushan/myrepo

Divya1@Divya:~ $docker pull divyabhushan/myrepo
Using default tag: latest
latest: Pulling from divyabhushan/myrepo
6cf436f81810: Pull complete
987088a85b96: Pull complete
b4624b3efe06: Pull complete
d42beb8ded59: Pull complete
d08b19d33455: Pull complete
80d9a1d33f81: Pull complete
Digest: sha256:c82b4b701af5301cc5d698d963eeed46739e67aff69fd1a5f4ef0aecc4bf7bbf
Status: Downloaded newer image for divyabhushan/myrepo:latest
--List the docker images
Divya1@Divya:~ $docker images
REPOSITORY            TAG     IMAGE ID      CREATED            SIZE
divyabhushan/myrepo   latest  72a21c221add  About an hour ago  88.1MB
busybox               latest  3a093384ac30  5 weeks ago        1.2MB
centos                latest  1e1148e4cc2c  2 months ago       202MB
--Create a docker container by running the docker image
--pass a shell argument : `uname -a`
Divya1@Divya:~ $docker run centos uname -a
Linux c70fc2da749a 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
--Docker images can be built by reading a dockerfile
--build a new image 'newrepo' with tag 1.1 from dockerFiles/dockerfile
docker build -t newrepo:1.1 dockerFiles/
--Now create a container from the above image:
--List all the containers
--start the container
--List only the running containers
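A sketch of the commands for the four steps above, using the standard Docker CLI (the container id placeholder is illustrative):

--Create a docker container by running the image (detached)
docker run -d newrepo:1.1

--List all the containers, running or stopped
docker ps -a

--Start a stopped container
docker start <container_id>

--List only the running containers
docker ps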
Puppet is a Configuration Management and deployment tool for administrative tasks. This tool helps in automating the provisioning, configuration, and management of Infrastructure and Systems.
Don't be surprised if this question pops up as one of the top interview questions on DevOps in your next interview.
In simple words:
The entire server infrastructure setup configuration is written as code and re-used on all the Puppet agent nodes (machines), which are connected via a Puppet master server.
This is achieved through code snippets called 'manifests', which are the configuration files for every agent node.
Jenkins is a self-contained, open source automation server(tool) for continuous development.
Jenkins aids and automates CI/CD process.
It gets the checked-in code from a VCS like Git using the 'git plugin', builds the source code, runs test cases in a production-like environment, and makes the code release-ready using a 'deploy' plugin.
Sample Jenkinsfile:
pipeline {
    agent { docker { image 'ubuntu:latest' } }
    stages {
        stage('build') {
            steps {
                sh 'uname -a'
            }
        }
    }
}
pipeline {
    agent { docker { image 'ubuntu:latest' } }
    stages {
        stage('build') {
            steps {
                sh 'uname -a'
            }
        }
        stage('Test') {
            steps {
                sh './jenkins/scripts/test.sh'
            }
        }
    }
}
We can specify conditions to run after the completion of the stages in a pipeline, using a 'post' section.
Code snippet
post {
    always {
        echo 'This block always runs!'
    }
    success {
        echo 'This block runs when the stages complete with a success status'
    }
    unstable {
        echo 'This block runs when the stages abort with an unstable status'
    }
}
Here are the post conditions reserved for a Jenkinsfile:
always: Run the steps in the post section regardless of the completion status of the Pipeline's or stage's run.
unstable: Only run the steps in post if the current Pipeline's or stage's run has an "unstable" status, usually caused by test failures, code violations, etc.
aborted: Only run the steps in post if the current Pipeline's or stage's run has an "aborted" status.
success: Only run the steps in post if the current Pipeline's or stage's run has a "success" status.
failure: Only run the steps in post if the current Pipeline's or stage's run has a "failed" status.
changed: Only run the steps in post if the current Pipeline's or stage's run has a different completion status from its previous run.
cleanup: Run the steps in this post condition after every other post condition has been evaluated, regardless of the Pipeline or stage's status.
One of the most frequently posed DevOps scenario based interview questions, be ready for this conceptual question.
Continuous Integration is a development practice wherein developers regularly merge or integrate their code changes into a common shared repository. Every code check-in is then verified by an automated build and automated test cases.
This approach helps to detect and fix bugs early, improve software quality, and reduce the validation and feedback loop time, hence increasing the overall product quality and enabling speedy product releases.
By default, git does not allow you to delete a branch whose work has not yet been merged into the main branch.
To see the list of branches not merged with the checked out branch run:
Divya1@Divya:initialRepo [master] $git branch --no-merged
  dev
--If you try to delete this branch, git displays a warning:
Divya1@Divya:initialRepo [master] $git branch -d dev
error: The branch 'dev' is not fully merged.
If you are sure you want to delete it, run 'git branch -D dev'.
--If the branch is deleted anyway using the -D flag:
Divya1@Divya:initialRepo [master] $git branch -D dev
--See the references log information
Divya1@Divya:initialRepo [master] $git reflog
cb9da2b (HEAD -> master) HEAD@{0}: checkout: moving from dev to master
b834dc2 (origin/master, origin/dev) HEAD@{1}: checkout: moving from master to dev
cb9da2b (HEAD -> master) HEAD@{2}: checkout: moving from master to master
cb9da2b (HEAD -> master) HEAD@{3}: checkout: moving from dev to master
b834dc2 (origin/master, origin/dev) HEAD@{4}: checkout: moving from master to dev
cb9da2b (HEAD -> master) HEAD@{5}: checkout: moving from uat to master
03224ed (uat) HEAD@{6}: checkout: moving from dev to uat
b834dc2 is the commit id from when we last switched to the 'dev' branch.
Create a branch named 'dev' from this commit id again:
Divya1@Divya:initialRepo [master] $git checkout -b dev b834dc2
Switched to a new branch 'dev'
Divya1@Divya:initialRepo [dev] $
A good branching strategy is the one that adapts to your project and business needs. Every organization has a set of its own defined SDLC processes.
An example branching structural strategy that I have used in my project:
Guidelines:
All the steps are specified in a Jenkinsfile, conditioned on the branch name, as sketched below.
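A sketch of such a branch condition in a declarative Jenkinsfile, assuming a multibranch pipeline and a hypothetical deploy script:

pipeline {
    agent any
    stages {
        stage('Deploy to dev') {
            when { branch 'dev' }
            steps {
                sh './deploy.sh dev'
            }
        }
    }
}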
git log --oneline <localBranch>..<origin/remoteBranch>
Your local git branch should be set up to track a remote branch.
Divya1@Divya:initialRepo [dev] $git branch -vv
* dev    b834dc2 [origin/dev] Add Jenkinsfile
  master b834dc2 [origin/master] Add Jenkinsfile
Reset the 'dev' commit history to 3 commits behind using the command:
Divya1@Divya:initialRepo [dev] $git reset --soft HEAD~3
Divya1@Divya:initialRepo [dev] $git branch -vv
* dev 30760c5 [origin/dev: behind 3] add source code auto build at every code checkin using docker images
Compare and list the missing logs in the local 'dev' branch that are present in 'origin/dev':
Divya1@Divya:initialRepo [dev] $git log --oneline dev..origin/dev
b834dc2 (origin/master, origin/dev, master) Add Jenkinsfile
c5e476c Rename 'prod' to 'uat'-break the build in Jenkings
6770b16 Add database logs.
Use ‘git pull’ to sync local ‘dev’ branch with the remote ‘origin/dev’ branch.
Git hooks are scripts that get triggered before (pre) or after (post) certain actions or events, such as the run of a git command.
#!/bin/sh
# Library includes:
. .git/hooks/hooks_library.lib

# An example hook script to verify what is about to be committed.
# Called by "git commit" with no arguments. The hook should
# exit with non-zero status after issuing an appropriate message if
# it wants to stop the commit.
#
# Aim: Check for any deleted file in the staging area; if one is found,
# stop this snapshot from being committed.

set_variables 1 $0

if [ "$(git status --short | grep '^D')" ]; then
    echo "WARNING!!! Aborting the commit. Found Deleted files in the Staging area.\n" | tee -a $LOGFILE
    echo "`git status --short | grep '^D' | awk -F' ' '{print $2}'`\n" | tee -a $LOGFILE
    exit 1
else
    echo "[OK]: No deleted files, proceed to commit." | tee -a $LOGFILE
    exit 0
fi
Scenario how I implemented the hooks scripts to enforce certain pre-commit and post-commit test cases:
Step 1: Running .git/hooks/pre-commit script.
[OK]: No deleted files, proceed to commit. Thu Feb 7 12:10:02 CET 2019 --------------------------------------------
Step 2: Running .git/hooks/prepare-commit-msg script.
Get hooks scripts while cloning the repo. ISSUE#7092
Enter your commit message here.

README
code/install_hooks.sh
code/runTests.sh
database.log
hooksScripts/commit-msg
hooksScripts/hooks_library.lib
hooksScripts/post-commit
hooksScripts/pre-commit
hooksScripts/pre-rebase
hooksScripts/prepare-commit-msg
newFile

Thu Feb 7 12:10:02 CET 2019
--------------------------------------------
Step 3: Running .git/hooks/commit-msg script.
[OK]: Commit message has an ISSUE number Thu Feb 7 12:10:02 CET 2019 --------------------------------------------
Step 4: Running .git/hooks/post-commit script.
New commit made:
1c705d3 Get hooks scripts while cloning the repo. ISSUE#7092
hooksProj [dev] $git rebase master topic
WARNING!!! upstream branch is master. You are not allowed to rebase on master
The pre-rebase hook refused to rebase.
A 'pre-receive' hook is triggered on the server just before a 'push' is accepted; it can be written to reject or allow the push operation, as in the sketch and transcript below.
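A minimal sketch of such a hook, assuming a hypothetical policy that declines every push (the script lives at hooks/pre-receive in the server-side repository and must be executable):

#!/bin/sh
# pre-receive: git feeds one "<old-sha> <new-sha> <ref>" line per pushed ref on stdin
echo "pre-receive hook script"
while read oldrev newrev refname; do
    echo "hooks/pre-receive: [NOK]- Abort the push command"
    exit 1    # any non-zero exit declines the entire push
done
exit 0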
localRepo [dev] $git push
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (2/2), 272 bytes | 272.00 KiB/s, done.
Total 2 (delta 0), reused 0 (delta 0)
remote: pre-recieve hook script
remote: hooks/pre-receive: [NOK]- Abort the push command
remote: To /Users/Divya1/OneDrive/gitRepos/remoteRepo/
! [remote rejected] dev -> dev (pre-receive hook declined)
error: failed to push some refs to '/Users/Divya1/OneDrive/gitRepos/remoteRepo/'
Install a new package in a container
docker run -it ubuntu

root@851edd8fd83a:/# which yum
--returns nothing
root@851edd8fd83a:/# apt-get update
root@851edd8fd83a:/# apt-get install -y yum
root@851edd8fd83a:/# which yum
/usr/bin/yum

--Get the latest container id
docker ps -a
CONTAINER ID  IMAGE   COMMAND      CREATED        STATUS                      PORTS  NAMES
851edd8fd83a  ubuntu  "/bin/bash"  6 minutes ago  Exited (127) 3 minutes ago

--See how the base image changed
docker diff 851edd8fd83a
Commit the changes in the container to create a new image.
Divya1@Divya:~ $docker commit 851edd8fd83a mydocker/ubuntu_yum
sha256:630004da00cf8f0b8b074942caa0437034b0b6764d537a3a20dd87c5d7b25179
--Check that the new image is listed
Divya1@Divya:~ $docker images
REPOSITORY           TAG     IMAGE ID      CREATED         SIZE
mydocker/ubuntu_yum  latest  630004da00cf  20 seconds ago  256MB
FROM divyabhushan/myrepo:latest
COPY hello.sh /home/hello.sh
CMD ["bash", "/home/hello.sh"]
# NOTE: only the last CMD in a Dockerfile takes effect; this line overrides the one above
CMD ["echo", "Dockerfile demo"]
RUN echo "dockerfile demo" >> logfile
--Build an image from the dockerfile, tag the image name as ‘mydocker’
docker build -t mydocker dockerFiles/

--Syntax: docker build --tag <imageName> <dockerfile directory>

Divya1@Divya:~ $docker images
REPOSITORY  TAG     IMAGE ID      CREATED         SIZE
mydocker    latest  aacc2e8eb26a  20 seconds ago  88.1MB

Divya1@Divya:~ $docker run mydocker
/home/divya
Hello Divya
Bye Divya
Write instructions in a dockerfile.
docker build -t learn_docker dockerFiles/
docker run -it learn_docker
--Tag the local image as:
<hub-user>/<repo-name>[:<tag>]
Examples:
docker tag learn_docker divyabhushan/learn_docker:dev docker tag learn_docker divyabhushan/learn_docker:testing
--list the images for this container:
Divya1@Divya:~ $docker images
REPOSITORY                 TAG      IMAGE ID      CREATED             SIZE
divyabhushan/learn_docker  develop  944b0a5d82a9  About a minute ago  88.1MB
learn_docker               dev1.1   944b0a5d82a9  About a minute ago  88.1MB
divyabhushan/learn_docker  dev      d3e93b033af2  16 minutes ago      88.1MB
divyabhushan/learn_docker  testing  d3e93b033af2  16 minutes ago      88.1MB

--Push the docker images to docker hub
docker push divyabhushan/learn_docker:dev
docker push divyabhushan/learn_docker:develop
docker push divyabhushan/learn_docker:testing

The push refers to repository [docker.io/divyabhushan/learn_docker]
53ea43c3bcf4: Pushed
4b7d93055d87: Pushed
663e8522d78b: Pushed
283fb404ea94: Pushed
bebe7ce6215a: Pushed
latest: digest: sha256:ba05e9e13111b0f85858f9a3f2d3dc0d6b743db78880270524e799142664ffc6 size: 1362
To summarize:
Develop your application code along with all the other dependencies required to run the application in the test environment, such as binaries, library files, and downloadables. Bundle it all in a directory.
NOTE: This docker image contains your application bundle = application code + dependencies + a test run-time environment exactly like your machine's. Your application bundle is highly portable with no hassles.
A must-know for anyone looking for top DevOps coding interview questions, this is one of the frequently asked CI CD interview questions.
Docker provides a 'system prune' command to remove stopped containers and dangling images. Dangling images are images that are neither tagged nor referenced by any container.
Run the prune command as below:
docker system prune
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache
Are you sure you want to continue? [y/N]
There is also a more controlled way of removing containers and images, step by step:
Step 1: Stop the containers
docker stop <container_id>
Step 2: Remove the stopped container
docker rm <container_id>
docker rm 6174664de09d
Step 3: Remove the images. First stop any containers using those images, then run:
docker rmi <image_name>:[<tag>]
--give image name and tag
docker rmi ubuntu:1.0
--give the image id
docker rmi 4431b2a715f3
As the number of docker machines increases, there needs to be a system to manage them all. Docker orchestration acts as a virtual docker manager and allows us to start, stop, pause, unpause, or kill the docker nodes (machines).
Docker has an in-built utility called “docker swarm”.
Kubernetes is another popular and versatile docker orchestration system. A cluster of docker engines is called a 'swarm'; swarm turns a collection of docker engines into a single virtual docker engine.
In a swarm orchestration arrangement, one machine acts as the swarm manager and controls all the other machines connected to the cluster, which act as swarm nodes.
This is how I created a swarm of dockers and managed them on my machine:
We need docker services, and docker machines to run these services on. Finally, we need a docker swarm to manage the docker nodes/machines.
Create a docker swarm and manage the services on different nodes and port numbers.
Step 1: Create docker machines: manager, node1, node2, node3, node4
docker-machine create --driver virtualbox manager
docker-machine create --driver virtualbox node1
docker-machine create --driver virtualbox node2
docker-machine create --driver virtualbox node3
docker-machine create --driver virtualbox node4

--Every node is started as a virtualbox machine.

--Set the docker machine 'manager' as active
eval $(docker-machine env manager)

--List the docker machines
docker-machine ls
Step 2: Create a docker swarm
--Initialize a swarm and add ‘manager’ to the swarm cluster using its ip address: 192.168.99.100
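The command itself is run from within the manager machine; it prints the 'docker swarm join' command, including a token, that the workers then use:

docker-machine ssh manager
docker swarm init --advertise-addr 192.168.99.100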
Step 3: Add the nodes as workers (or additional managers) to the swarm
--Connect to each node and run the above swarm join command
There can be more than one ‘manager’ node in a swarm
--connect to node1 and join node1 to the swarm as a worker
docker-machine ssh node1
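The join command is the one printed by 'swarm init' above; the worker token placeholder stands for that value:

docker swarm join --token <worker-token> 192.168.99.100:2377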
--List the nodes connected in the swarm
--Connect to the manager node:
$docker-machine ssh manager
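Then list the nodes with the standard command (it works only on a manager node):

docker node ls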
Step 4: From the ‘manager’ node create new docker services
docker-machine ssh manager
--Create service replicating them on more than 1 nodes and expose them on the mentioned port.
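A sketch of such commands, matching the services described below ('httpd' replicated on 3 nodes at port 80, 'couchbase' replicated on 2 nodes at port 8091):

docker service create --name httpd --replicas 3 -p 80:80 httpd
docker service create --name couchbase --replicas 2 -p 8091:8091 couchbase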
These commands pull the docker images from docker hub.
Step 5: List the docker services created; also use 'docker service ps <service>' to view the node machines these services are running on.
--List the services that will be shared among different swarm nodes
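The listing commands themselves (standard Docker CLI; 'httpd' is the service created above):

docker service ls
docker service ps httpd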
Swarm randomly assigns nodes to the running services when we replicate the services.
--service ‘httpd’ running on 3 nodes: node1, node2 and node3
--service ‘couchbase’ is running on 2 nodes: node1 and manager at port: 8091
--'couchbase' service can be accessed via 'node1' (ip: 192.168.99.101) and 'manager' (ip: 192.168.99.100) at port 8091, as shown below
Screenshots of the running services:
‘manager’ node can create/inspect/list/scale or remove a service.
Refer
docker service --help
Conclusion:
A number of services are balanced over different nodes (machines) in a swarm cluster. A node declared as a 'manager' controls the other nodes. Basic docker commands work from within a 'manager' node.
This failure happens when the 'manager' docker machine is not active; as a result, the new node machine is not able to join the swarm cluster.
To fix this:
Divya1@Divya:~ $docker-machine ssh worker2
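A plausible recovery sketch, under the assumption above: bring the manager back up, make it the active machine, and retry the join from the worker (the token placeholder stands for the value printed by 'swarm init'):

docker-machine start manager
eval $(docker-machine env manager)
docker-machine ssh worker2
docker swarm join --token <worker-token> 192.168.99.100:2377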
Resources are the key elements of Puppet code; they define the architecture and manage the configuration of a system infrastructure.
Here is how a resource is written:
resource_type { 'resource_name':
  attribute => value,
  attribute => value,
  ...
}
Example:
user { 'jack':
  ensure => present,
  gid    => 'home',
  shell  => '/bin/bash',
}
This code evaluates as:
Resource type 'user' with the resource title 'jack' has the attributes 'ensure', 'gid', and 'shell', each set to its respective value.
(Note: attributes such as 'owner', 'group', and 'mode' belong to the 'file' resource type, not to 'user'.)
We can get a list of all the available resource types with the command:
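One such command, which lists every resource type known to the installation:

puppet describe --list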
Some of the common resource types are 'file', 'package', 'service', 'user', 'group', 'exec', and 'cron'.
Example of resource_type 'service'. This resource ensures that the service 'network' is running:

service { 'network':
  ensure => running,
}

This resource ensures the package 'apache' is installed, and its pre-requisite requires the 'apt-update' command to be executed first:

package { 'apache':
  require => Exec['apt-update'],
  ensure  => installed,
}
vi /etc/puppet/manifests/lamp.pp
# execute 'apt-get update'
exec { 'apt-update':
  command => '/usr/bin/apt-get update',
}

# install apache2 package
package { 'apache2':
  require => Exec['apt-update'],
  ensure  => installed,
}

# ensure apache2 service is running
service { 'apache2':
  ensure => running,
}

# install mysql-server package
package { 'mysql-server':
  require => Exec['apt-update'],
  ensure  => installed,
}

# ensure mysql service is running
service { 'mysql':
  ensure => running,
}

# install php5 package
package { 'php5':
  require => Exec['apt-update'],
  ensure  => installed,
}

# ensure info.php file exists
file { '/var/www/html/info.php':
  ensure  => file,
  content => '<?php phpinfo(); ?>',  # phpinfo code
  require => Package['apache2'],
}
Save and exit.
puppet apply /etc/puppet/manifests/lamp.pp
This, along with other DevOps practical interview questions for freshers, is a regular feature in DevOps interviews. Be ready to tackle it with the approach mentioned above.
Jenkins stores the metadata of every project under $WORKSPACE path.
Two projects:
Below is the code screenshot for project_next.
This accesses the myProject/logs/db.log file and reads it for the pattern 'prod', as sketched below.
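A sketch of such a build step, assuming the default workspace layout under $JENKINS_HOME and that both jobs run on the same node:

#!/bin/sh
# Read the other project's log file and search it for the pattern 'prod'
grep 'prod' "$JENKINS_HOME/workspace/myProject/logs/db.log"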
Jenkins auto-builds the source code from Git (or any VCS) at every check-in, tests the source code, and deploys the code in a tomcat environment via docker. The webapp source code is then deployed by the tomcat server to a production environment.
Pre-requisite:
Git project structure:
Divya1@Divya:myWeb [master] $
Dockerfile
webapp/
  WEB-INF/
    classes/
    lib/
    web.xml
  index.jsp

--Dockerfile content:
vi Dockerfile

FROM tomcat:9.0.1-jre8-alpine
ADD ./webapp /usr/local/tomcat/webapps/webapp
CMD ["catalina.sh","run"]
Add a new project in Jenkins and track your git project URL under the SCM section. Have a dockerfile with the instructions to connect to the tomcat docker and deploy the webapp folder.
--Add the build section to ‘execute shell’ as below:
#!/bin/sh
echo "Build started..."
docker build -t webapp .
echo "Deploying webapp to tomcat"
docker run -p 8888:8080 webapp
echo http://localhost:8888/webapp

--Build the project from Jenkins:
Below is the screenshot of the output:
--Click on the link: http://localhost:8888/webapp
This is one of the common yet tricky DevOps interview questions and answers for experienced professionals, so do not miss this one.
Sample code:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './test_suite1 build'
            }
        }
        stage('Test') {
            steps {
                sh './test_suite1 test'
            }
        }
    }
    post {
        always {
            archiveArtifacts 'build/libs/**/*.jar'
        }
    }
}
'archiveArtifacts' takes the path and filename pattern of the artifacts to archive.
A backup of Jenkins is needed for disaster recovery, retrieving old configurations, and auditing.
$JENKINS_HOME folder keeps all the Jenkins metadata.
That includes: build logs, job configs, plugins, plugin configurations etc.
Install the 'thinBackup' plugin in Jenkins and enable the backup from the settings tab. We have to specify the backup directory and what we want to back up.
Backup directory: $JENKINS_HOME/backup
Backup files generated with the timestamp in the filenames will be stored under the path we specified.
divya@jenkins backup]$ pwd
/var/lib/Jenkins/backup
uat@jenkins backup]$ ls
FULL-2019-02-4_07-14  FULL-2019-02-11_13-07
It is a good practice to version-control this backup (using Git) and move it to the cloud.
Restoring:
Backup files are in tar+gzip format.
Copy these over to another server, then unzip and un-tar them on that server.
cd $JENKINS_HOME
tar xvfz /backups/Jenkins/backup-project_1.01.tar.gz
config.xml
jobs/myjob/config.xml
…
A staple in DevOps technical interview questions and answers, be prepared to answer this using your hands-on experience.
DevOps describes a culture and set of processes that bring operations and development teams together to build software collaboratively. It empowers organizations to create and improve products at a faster pace than they can with traditional software development approaches. Enterprises also prefer professionals who have completed DevOps certification courses, for a certain level of assurance.
DevOps is a leading skill today because of the number of job openings and the high salaries for DevOps and related roles. The demand for DevOps engineers in the market is increasing enormously, and among the many career avenues opening up in the IT industry every day, DevOps has emerged as one of the most coveted and sustainable career choices.
To land your dream DevOps job, you are at the right place. The interview questions on DevOps here will help you crack your DevOps interview and pursue a dream career as a DevOps engineer, as well as help you test your understanding of DevOps. These interview questions are suitable for both freshers and experienced professionals at any level; they are aimed at intermediate to somewhat advanced DevOps professionals, but even if you are just a beginner or fresher, you will easily understand the answers and explanations. The average pay for a Development Operations (DevOps) Engineer is INR 624,339 per year.
These interview questions on DevOps have been prepared by industry-experienced trainers. If you wish to learn more about DevOps, you can also take up a DevOps training course to help you master it.
We hope these DevOps interview questions and answers are useful and help you get the best job in the networking industry. Be thorough with these DevOps interview questions and take your expertise to the next level.