
DevOps Engineer Interview Questions and Answers for 2024

DevOps engineer job openings have seen a massive spike over the past few years. Organizations such as Netflix, Google, and Amazon use DevOps to increase the productivity and efficiency of their teams, and they hire DevOps professionals to implement the DevOps lifecycle within their workflows. Compared to traditional processes, this improves the efficiency, speed, and security of software development and delivery, which translates into a competitive advantage for businesses and their customers. If you have started preparing for development and operations roles in the IT industry, you already know it is a challenging field that takes real preparation to break into. There are various roles available in the market for DevOps professionals, such as DevOps Engineer, Platform Engineer, Build Engineer, etc. We have collected the most frequently asked DevOps interview questions and answers to help you prepare for DevOps roles in the industry.


Beginner

A DevOps engineer is an information technology professional who works with developers and the IT operations team to ensure a stable production environment, smooth code releases, application availability, and software implementation, development, and maintenance around the clock.

For a DevOps engineer to succeed in the role, it is important to have a deep understanding of both development (including the fundamentals of whatever coding language is used) and operational processes, which include administering the organisation's network and the servers that host the application being built. Other responsibilities include creating accounts, troubleshooting, updating permissions, and ensuring that everything is regularly backed up.

Beyond technical skills, a DevOps engineer should also be a flexible team player, since the role often requires working irregular hours and staying on call to resolve production issues or bugs.

It is also important for a DevOps engineer to have a solid understanding of the SDLC (Software Development Lifecycle) and all the components of the delivery pipeline, so that CI/CD pipelines can be automated as much as possible.

SSH stands for Secure Shell. It is an administrative protocol that provides an encrypted connection between two hosts and lets users control remote servers or systems over the Internet from the command line.

SSH runs on TCP port 22 and provides mechanisms for remote user authentication, encrypted communication of input from the client to the host, and return of the output to the client in encrypted form.
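For example, a typical SSH session from the command line (the hostname and username below are illustrative):

# Connect to a remote server over SSH (default port 22; -p overrides it)
ssh -p 22 username@remote-host.example.com

# Run a single command remotely; the output comes back over the encrypted channel
ssh username@remote-host.example.com "uptime"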

Below are the steps a developer can follow to push a file from their local system to a GitHub repository using the Git CLI:

a. Initialize Git in the project folder. After navigating to the folder we want to push to GitHub, run:

git init

This creates a hidden .git directory in the folder, which Git uses to store the metadata and version history for the project.

b. Add files to Git:

This command tells Git which files to include in the commit.

git add -A

The -A (or --all) option includes all files.

c. Commit the added files:

git commit -m 'Add project'

d. Add a new remote origin:

Here "remote" refers to the remote version of the working repository, and "origin" is the default name Git gives to the remote server.

git remote add origin [copied web address]

e. Push to GitHub:

This pushes the committed files to the remote repository:

git push origin master

KPI stands for Key Performance Indicator. There are many DevOps KPIs that matter across the lifecycle; a few of them are given below:

  • Time to Detection: This KPI measures the time required to detect failures or issues. The faster issues and bugs are detected, the easier it is to maintain security and keep downtime and user impact to a minimum.

  • Deployment Frequency: An increased deployment frequency leads to agility and faster compliance with the changing needs of users.

  • Failed Deployment Rate: This measures the proportion of deployments that result in outages or other issues; it should be as low as possible.

  • Mean Time to Recovery (MTTR): This measures the time between a service going down and it being up and running again.

  • Application Performance: This KPI keeps a check on performance before end users face performance issues and report bugs.

  • Service Level Agreement Compliance: The service should have high availability, with uptime as high as 99.999%, since this is one of the most crucial parameters for any organisation.

The Forking Workflow differs from the standard Git workflow in that the Git workflow uses a single server-side repository that acts as the 'central' codebase, whereas the Forking Workflow gives every developer their own server-side repository. The Forking Workflow is commonly used in public open-source projects, where contributions can be integrated without everyone pushing code to a single central repository. Only the project maintainer has access to push code to the official repository.

There are two different syntaxes for writing a pipeline. A DevOps engineer can choose between Declarative and Scripted Pipeline.

The major differences between Declarative and Scripted Pipelines come down to syntax, flexibility, and ease of use:

  • Declarative Pipeline is the newer way to create a pipeline and supports pipeline-as-code, which makes the code easier to write and read. The code is written in a Jenkinsfile that can be checked into Git.
  • Scripted Pipeline is the older way of writing pipeline code; here the Jenkinsfile is often written directly on the Jenkins UI instance.

  • Scripted Pipeline uses a strict Groovy-based syntax, which is not desirable for all users, whereas Declarative Pipeline offers a simpler, more opinionated Groovy syntax.
  • Declarative Pipeline encourages a declarative programming model, whereas Scripted Pipeline follows a more imperative programming model.
  • A Declarative pipeline is defined inside a block labelled 'pipeline', while a Scripted pipeline is defined inside a 'node' block.
  • Declarative Pipeline has a stricter, pre-defined structure, which works well for simpler continuous delivery pipelines.

Scripted Pipeline has very few limitations with respect to structure and syntax beyond those imposed by Groovy itself, which makes it more suitable for complex requirements.

Structure and syntax of the Declarative pipeline: 

The agent defines where the whole pipeline runs, for example in Docker. The agent accepts the following parameters:

  • any – the pipeline will run on any available agent.
  • none – no global agent is allocated; each stage in the block must declare its own agent.
  • label – runs on an agent with the given label in the Jenkins environment.
  • docker – runs the pipeline in a Docker environment.
pipeline {
    agent { label 'node-1' }
    stages {
        stage('Source') {
            steps {
                git '<Git url>'
            }
        }
        stage('Compile') {
            tools {
                gradle 'gradle4'
            }
            steps {
                sh 'gradle clean test'
            }
        }
    }
}

Structure and syntax of the Scripted Pipeline: 

Node Block: 

The node is the agent that runs the workload jobs and is part of the Jenkins architecture.

node { 
} 

Stage Block: 

A stage block consists of one or more tasks, depending on the project. The common stages in a pipeline are:

  • Cloning the code from SCM
  • Building the project
  • Running the Unit Test cases
  • Deploying the code
  • Other functional and performance tests.
stage('<stage name>') {
}

Overall, the Scripted Pipeline looks like this:

node('node-1') {
    stage('Source') {
        git '<Git_url>'
    }
    stage('Compile') {
        def gradle_home = tool 'gradle4'
        sh "'${gradle_home}/bin/gradle' clean test"
    }
}

To start implementing DevOps in a project, we first need to decide on the approach, which requires an understanding of a few areas:

  • The programming language used in the project (e.g., Java, Python, Angular).
  • Operating system details such as memory, I/O, disk management, and network security, for better management.
  • Version control, continuous integration, testing, continuous deployment, continuous delivery, and monitoring.
  • The basics of DevOps, i.e., continuous integration, continuous development, continuous delivery, continuous deployment, and monitoring, along with the tools used in the various phases.
  • Interaction with other teams to design a roadmap for the process. Once the design and a proof of concept are ready, we can start work accordingly.

The differences between continuous testing and automation testing are given below:

  • Continuous testing is the process of executing all automated test cases as part of the software delivery lifecycle so that continuous feedback on a software release can be obtained. Automation testing is the process of running predefined test scripts to review and validate an application.
  • Continuous testing focuses on business and KPI risk and provides insight into whether the software is good to release to production. Automation testing is generally designed to show pass or fail results against the application requirements.
  • The prime focus of automation testing is to identify and remove bugs introduced in the last cycle.

To maximize security, Jenkins has the Credentials plugin, which provides a default internal credentials store that can be used to hold different types of high-value credentials, for example a username with a password, an SSH username with a private key, an AWS access key pair for bucket deployment, a Git user token, a Jenkins build token, or a secret file/text.

The Jenkins Credentials plugin is a better option than the alternatives, provided configuration and patching are done correctly.
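A minimal sketch of using a stored credential inside a pipeline via the Credentials Binding plugin, assuming a username/password credential with the ID 'repo-creds' already exists in the Jenkins store (the repository URL is illustrative):

pipeline {
    agent any
    stages {
        stage('Use credentials') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'repo-creds',
                                                  usernameVariable: 'REPO_USER',
                                                  passwordVariable: 'REPO_PASS')]) {
                    // The bound variables are masked in the build log.
                    sh 'git clone https://$REPO_USER:$REPO_PASS@example.com/org/repo.git'
                }
            }
        }
    }
}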

A Multibranch Pipeline can be configured to create a set of pipeline projects for each branch detected in the configured SCM repository, driven by a Jenkinsfile. This can be used to run different Jenkinsfiles for different branches of the same project.

As per the official Jenkins documentation, in a Multibranch Pipeline project Jenkins automatically discovers, manages, and executes pipelines for branches that contain a Jenkinsfile in source control.

Docker volumes are file systems mounted into Docker containers to preserve data generated by the running container. They are widely used to ensure data persistence while working with containers.

Reasons for using Docker volumes:

  1. Previously, data did not persist once a container no longer existed, and it was difficult to get data out of a container, especially if another process needed it. Volumes solve this, and it is also easier to back up data that lives on a Docker volume.
  2. We can manage Docker volumes using the Docker CLI or the Docker API, and they work on both Linux and Windows.
  3. A container's writable layer is tightly coupled to the host machine, which makes moving the data elsewhere difficult. Volumes make migrating data easier.
  4. With volume drivers, we can store volumes on remote hosts or cloud providers, which can be useful for encrypting the contents of a volume or adding other functionality.

There are two ways to mount a Docker volume while launching a container: the -v and --mount flags, either of which can be added to the docker run command.

The -v or --volume flag has traditionally been used for standalone containers, while --mount was originally used for Swarm services.

The major difference is that the -v syntax combines all the options in one field, whereas the --mount syntax separates them.

Command to create a Docker volume:

$ docker volume create [volume_name]

Docker automatically creates a directory for the volume on the host under /var/lib/docker/volumes/.

We can then mount this volume into a container so that data persistence and data sharing across multiple containers can be ensured.

A few other important Docker volume commands:

To list the volumes:

$ docker volume ls

To inspect a volume:

$ docker volume inspect [volume_name]

Mount a data volume:

To mount a data volume into a container, add the --mount flag to the docker run command. This adds the volume to the specified container:

$ docker run --mount source=[volume_name],destination=[path_in_container] [docker_image]
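For comparison, a rough sketch of the same mount expressed with both syntaxes (the volume name my_volume and the nginx image are just examples):

# --mount syntax: options are separate key=value pairs
docker run --mount source=my_volume,target=/app/data nginx

# -v syntax: the equivalent mount in a single colon-separated field
docker run -v my_volume:/app/data nginx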

If you are a DevOps engineer, you have probably come across the terms SLI, SLA, and SLO, since they are used as part of the daily job.

An optimum level of performance is very important for software, and it can be monitored by setting thresholds or targets; if latency reaches or exceeds these thresholds, it may impact the user experience. Application metrics such as latency, uptime, throughput, error count, and mean time to recovery are monitored by DevOps engineers or SREs by setting benchmarks for them. SLI, SLA, and SLO come into the picture for these tasks. Let us briefly explain each term below.

  • Service Level Indicator (SLI): 

Service behaviour can be directly measured with the help of a Service Level Indicator. Performance metrics such as latency and error rate, which indicate overall performance, can be easily measured with SLIs. For example, consider how many successful requests an application served out of the total requests received over one month: if there is only 1 failed request out of 100k requests, the SLI would be 99.999% availability.

  • Service Level Agreement (SLA): 

An SLA, as its name says, acts as an agreement between the service provider and the service receiver (user). SLAs commit to application availability and can include responsiveness to bugs and incidents. Breaking an SLA leads to penalties, which can take the form of service subscription credits or refunds. Today, most businesses use cloud-based PaaS, IaaS, and SaaS. Take the example of an online payment, which may require hitting remote servers multiple times; if the online service goes down suddenly, the transaction halts. So SLAs are put in place to ensure application availability with a minimum of failed requests, so that specific standards are maintained between both parties.

Since application behaviour can be tracked with SLIs, an engineer can set an SLA after getting an idea of that behaviour. An SLA sets an objective for application availability, for example no more than 10 failed requests out of 100k total requests (99.99% availability); if failed requests exceed this threshold, it may lead to a penalty according to the signed contract.

  • Service Level Objective (SLO): 

An SLO (Service Level Objective), as its name suggests, sets a performance objective or target, similar to an SLA. SLOs are implemented to avoid breaking SLAs and to ensure users never have to report performance issues. Take the example of an SLA promising 99.99% availability, i.e., at most 10 failed requests out of 100k total requests. To avoid breaking this SLA, we can set an SLO of 99.995% availability, i.e., at most 5 failed requests out of 100k total requests; if the application fails to meet this objective, improvements need to be made to the application design, and the system must be searched for the loopholes that are hurting performance.

DevOps, as its name suggests, is a combination of Development and Operations. From planning to the final product, this process is followed by the development and operations teams together. The DevOps lifecycle has several phases: continuous development, continuous testing, continuous integration, continuous deployment, continuous monitoring, continuous feedback, and continuous operations. We briefly explain each of these phases below.

  • Continuous Development

This is the first phase of the DevOps lifecycle and involves both planning and coding the software. This phase sets the vision of the project so that it is understood completely at a high level. It provides financial and management-level context as well as insights from both the operations and development teams. No specific DevOps tool is required in this phase; any feasible tool can be used. During application coding, any SCM (Source Code Management) tool, such as Git, can be used to store the code. Code versioning is also possible: developers can modify the saved code and create multiple versions of it. Any programming language can be used according to the project requirements. After the code is checked in to the SCM tool, a packaging tool such as Maven can package it into an executable format that can be moved to the next phase.

  • Continuous Testing

This phase tests the application continuously to detect bugs or issues at an early stage. Automation testing tools such as Selenium and TestNG are preferred over manual testing to save time. Jenkins can automate the entire testing phase: Selenium tests the code automatically and TestNG generates the corresponding test reports. The primary focus of this phase is to improve code quality by testing it early and then integrating it continuously with the existing source code.

  • Continuous Integration

In the DevOps lifecycle, this stage acts as the heart. It involves integrating new code with the existing source code, unit and integration testing, and packaging the code so it can move towards the production server. When developers commit new code to the SCM tool (for example Git), the updated code is built and tested to detect problems early. Tools such as Jenkins or CircleCI can be used for continuous integration: they detect changes in the source code in Git, fetch the updated code, test it, and package it into an executable format (for example a .jar) to move it to the next phase.

  • Continuous Monitoring

This phase, also known as Continuous Control Monitoring (CCM), is a crucial stage used to monitor operational parameters of application usage and identify problem areas or trends. Tools such as Nagios and Splunk can be used for continuous monitoring. Monitoring can take the form of documentation or of collecting large-scale data from the application while it is in continuous use. This phase helps maintain service availability and security; issues such as server unavailability or memory problems are rectified here.

  • Continuous Deployment

This phase, as its name suggests, deploys the code to the production servers. Because new code is deployed continuously, it is very important to ensure code consistency across all environments, for which containerization tools such as Docker can be used to keep development, staging, testing, and production environments consistent. This leaves little room for errors or failures in production, since these tools replicate the same packages and dependencies from lower environments through to production.

  • Continuous Operations

In DevOps, continuity is the key to automating all processes from planning to product release. This phase focuses on automating the release of the application or software and its regular updates to end users, which helps organizations release products to market without delay and lets developers focus more on code development than on release management.

  • Continuous Feedback

Feedback is crucial after every software release for making the required improvements in the next one. Since continuity is key in DevOps to saving time and removing manual work, continuous feedback should sit between the operations and development phases so that feedback from application operations flows back continuously, issues in the current release are identified, and a better version is released in the future based on that feedback.

We can use Docker Compose to coordinate multiple containers and their configuration. With Docker Compose, we only need one file, docker-compose.yaml, where we define everything about build time and run time, and one command, docker-compose up.

Below is a sample of using Docker with multiple environments for Node.js:

FROM node:8-alpine
WORKDIR /usr/src/your-app
COPY package*.json ./
RUN if [ "$NODE_ENV" = "development" ]; \
    then npm install; \
    else npm install --only=production; \
    fi
COPY . .

Development command:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

Production command:

docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
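As a rough sketch of how an override file above might look (the service name web and the paths are illustrative, not taken from the original), the second -f file is merged on top of the base docker-compose.yml:

# docker-compose.dev.yml -- development-only overrides merged over docker-compose.yml
version: "3"
services:
  web:                         # must match a service defined in the base docker-compose.yml
    environment:
      - NODE_ENV=development
    volumes:
      - .:/usr/src/your-app    # mount the source tree for live editing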

Git stash is used when we are working on a project and, due to a production issue or some other scenario, we need to switch to something else. In that case we can run git stash, which saves our uncommitted changes so they can be reapplied later.

When we are done with a stashed item and want to remove it from the list, we can run git stash drop. It removes the most recently added stash item by default; to remove a specific item, pass it as an argument.
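A quick sketch of the typical flow (branch names and stash references below are illustrative):

git stash                   # save uncommitted changes and clean the working tree
git checkout hotfix-branch  # switch away to handle the urgent work
git checkout feature-branch # come back to the original work
git stash list              # show saved stashes, e.g. stash@{0}
git stash pop               # re-apply the latest stash and drop it from the list
git stash drop stash@{1}    # remove a specific stash without applying it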

Git squash is used to combine multiple previous commits into one; we can merge several commits into a single commit with the interactive rebase command. It allows developers to simplify the Git tree by merging sequential commits into one another.

We also have the option to squash using an IDE such as IntelliJ or Eclipse, which have integrated support for Git operations and allow us to squash commits from the GUI.

To squash the last n commits into a single commit using Git, we can use the commands below.

Note: squash is not a command; it is a Git operation.

git rebase -i HEAD~n
e.g., git rebase -i HEAD~10
git commit
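When the interactive rebase editor opens, the oldest commit is listed first; keeping pick on the first commit and changing the rest to squash (or s) combines them into one. A sketch with made-up hashes and messages:

pick   a1b2c3d Add login form
squash d4e5f6a Fix validation typo
squash 9f8e7d6 Adjust form styling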

Note that squashing commits and rebasing have a few drawbacks as well, since they rewrite the history of the repository.

One issue is that if the rewritten history is not checked carefully, it may lead to conflicts; to avoid this, it is suggested to keep the history clean.

Another drawback is that we lose the granularity of the individual commits because of squashing.

Branching is one of the most important concepts in a version control system. Some of the most commonly used branch types are master, hotfix, staging, feature, task, and release branches.

Let's discuss the need for branching before looking at the types:

  1. Teams can work on development in parallel, increasing and optimizing overall productivity.
  2. A branching strategy enables planned and structured release management for the project.
  3. It also makes it possible to maintain multiple versions of the release and the code.

The details are as follows:

  1. Master: This is the main branch, into which all changes eventually get merged; it contains the latest code of the repository, stable enough to go live in the production environment.

As a DevOps best practice, direct access to merge into master should be restricted, and code should preferably be merged into this branch through the CI/CD pipeline.

  2. Hotfix: This branch is created to deal with production issues where a fast fix is required.

A hotfix branch is similar to a release preparation branch but is focused on a specific bug fix. Once the changes are done, it is merged into both the master and develop branches.

  3. Staging: This branch is maintained for the QA environment and is eventually deployed to the QA server.
  4. Feature branching: This branch type is used to develop new features; when a feature is fully tested and validated by automated tests, the branch is merged into master.
  5. Task branching: Also known as issue branching, this directly connects issues with the source code. Each issue gets its own branch, with the issue key included in the branch name, so it is easy to see which code implements which task or issue just by looking at the branch name.
  6. Release branching: This branch type is used to collect fixes and improvements in preparation for a production release. Once the develop branch contains all the features required for a release, it is cloned to form a release branch.

Once the changes are ready to ship, the release branch is merged into the develop and master branches, after being tagged with the release version number on the main branch.

Beyond this, branching strategies vary from company to company based on their requirements.

The main differences between Continuous Deployment and Continuous Delivery are given below:

  • Continuous Deployment is an approach whose prime goal is to get software developed and released in as short a span as possible. The goal behind Continuous Delivery is to get new features, configuration changes, and bug fixes implemented in the application and ready for release.
  • Continuous Deployment refers to deploying source code to production in a fully automated way. Continuous Delivery evolved from Continuous Integration.
  • Continuous Deployment is a fully automated process that does not require any manual or human intervention. In Continuous Delivery, manual intervention is required at some stages, for example a manager's approval before code changes are deployed to the production environment.
  • The prime focus of Continuous Deployment is to make sure deployment is completed quickly and reliably by eliminating manual intervention and automating the entire process. Continuous Delivery's benefits are frequent releases and a quicker response to tickets; it helps make releases more stable, reliable, and controllable.
  • Continuous Deployment suits organizations that want to release new features on a regular basis, whether daily or even hourly. Continuous Delivery suits organizations that want to stage new features and release on a particular schedule.

Intermediate

To handle different machines that require different user accounts to log in, we can set inventory variables in the inventory file.

For example, the hosts below have different usernames and ports:
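A minimal inventory sketch (the host names, users, and ports are illustrative, since the original example is not shown):

[webservers]
web1.example.com ansible_user=deploy ansible_port=22

[dbservers]
db1.example.com ansible_user=dbadmin ansible_port=2222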

If we want to automate the password input for a playbook, we can store the passwords in an encrypted vault file and have Ansible fetch them when required. Per-host connection behaviour, such as going through a jump host, can also be set with inventory variables, for example:

ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q testuser@gateway.example.com"'

Another option is a separate script that prints the vault password to stdout, which can be passed at run time:

ansible-playbook launch.yml --vault-password-file ~/.vault_pass.py

We can copy Jenkins jobs from one server to another using the steps below:

  1. Move a job from one installation of Jenkins to another by simply copying the corresponding job directory.
  2. Make a copy of an existing job by cloning its job directory under a different name.
  3. Rename an existing job by renaming its directory. Note that if you change a job name, you will need to update any other job that calls the renamed job.

These operations can be done even while Jenkins is running.

For changes like these to take effect, click "Reload Configuration from Disk" to force Jenkins to reload the configuration from disk.

Reference- https://wiki.jenkins-ci.org/display/JENKINS/Administering+Jenkins  

EBS and EFS are both faster than Amazon S3, due to higher IOPS and lower latency.

EBS can be scaled up or down with a single API call. Since EBS is cheaper than EFS, it can be used for database backups and for low-latency interactive applications that require consistent, predictable performance.

We can recover a branch whose changes had already been pushed to the central repository but which was accidentally deleted, by finding the latest commit of that branch in the reflog and then checking it out as a new branch.
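A rough sketch of the recovery (the commit hash and branch name are illustrative):

git reflog                                 # find the last commit the deleted branch pointed to
git checkout -b recovered-branch a1b2c3d   # recreate a branch at that commit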

A canary release is the practice of making staged releases in which software changes and updates are rolled out to a small number of users first, so that they can be tested and provide feedback. Once the change is approved and accepted, the update is rolled out to the rest of the users.

With canaries, the new version of the application is gradually deployed while receiving a very small amount of live traffic (i.e., a subset of live users connect to the new version while the rest still use the previous version).

One of the biggest advantages of canary deployment is that we can push new features more frequently without worrying that a new feature will harm the experience of our entire user base.

In Linux, root privileges refer to a user account that has full access to all files, applications, and system functionality. There are two ways we can reset the root password:

To reset the password as a sudo user, we can follow the steps below:

Open a terminal and run:

$ sudo passwd root

The system will prompt for the password we use to log in to the system.

Next, we are prompted for the new password. We enter the new password and re-enter it at the retype-password prompt to confirm it.

If we are resetting the password as the root user, we can log in from the terminal using:

$ su root

We will be asked for the current root password.

Once logged in as root, we can type the following command to change the password:

$ passwd

In Linux, a timestamp is actually stored as a number, not in date-and-time format, so to make it human readable Linux converts it into the required date and time format.

Generally, a timestamp is recorded whenever an action is performed on a file: it keeps a record of when a file was accessed, modified, or changed.

mtime: Stands for modification time. It indicates the last time the content of the file was modified.

For example, if data is added, deleted, or edited, the modification timestamp changes.

To view the modification timestamp, we can use:

$ ls -l <filename>

ctime: Stands for change time. It refers to changes in the file's properties and indicates the last time some metadata of the file was changed.

For example, if the file's permissions were modified by a user or by some automated code, ctime reflects that.

To view the change timestamp, we can use:

$ ls -lc <filename>

atime: Stands for access time. It is the timestamp of when the file's content was last read, for example by an application or a user using grep or cat, without necessarily modifying anything in the file.

To view the access timestamp, we can use:

$ ls -lu <filename>

xargs in Linux is used to build and execute commands from standard input. It takes input and converts it into command-line arguments for another command.

Some commands in Linux only accept arguments rather than reading standard input; this is where xargs comes to the rescue. It is mainly used in file management, where xargs is combined with commands such as rm and cp.

Syntax for xargs in Linux:

xargs <options> [command]

A few of the most commonly used xargs options are:

  • -0: input items are terminated by a null character instead of whitespace
  • -a file: read items from a file instead of standard input
  • -d delim (--delimiter): input items are terminated by the given character
  • -p: prompt the user about whether to run each command line, reading a line from the terminal
  • -r: if standard input contains no non-blanks, do not run the command
  • -x: exit if the size is exceeded
  • --help: print a summary of the options to xargs and exit
  • --version: print the version number of xargs and exit
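A typical usage sketch (the .log pattern is just an example): deleting all matching files found by find, with -print0 and -0 handling spaces in file names safely:

find . -name "*.log" -print0 | xargs -0 rm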

To block an IP address in Linux, we use iptables and the netfilter firewall, logged in as the root user.

To block the IP address, we can use the command below:

iptables -A INPUT -s <IP-ADDRESS> -j DROP

If we want to block access from an IP address to a particular port n only, we can use:

iptables -A INPUT -s <ip-address> -p tcp --destination-port n -j DROP

The above rule drops all packets coming from the IP address to port n.

If we want to unblock the IP later:

iptables -D INPUT -s <ip-address> -j DROP
service iptables save

We can refresh static components of a deployed application in WebLogic. The weblogic.Deployer utility can target a specific component on specific servers with the following syntax:

java weblogic.Deployer -adminurl http://admin:7001 -name appname -targets server1,server2 -deploy jsps/*.jsp 

Advanced

We can use the commands below to check which branches have been merged into master:

  • git branch --merged

This lists the branches merged into HEAD (i.e., the current branch).

  • git branch --no-merged

This lists the branches that have not been merged into the current branch.

  • git branch --merged master

This lists the branches merged into master.

Note: By default, this applies only to local branches. The -a flag shows both local and remote branches, and the -r flag shows only remote branches.

sudo stands for "superuser do", where the superuser is the root user of Linux. It is used as a prefix to any command to elevate privileges, allowing a user to execute the command as another user, typically at root level.

To use the sudo command, the user needs to be added to the sudoers file located at /etc/sudoers.

Git reflogs record when the tips of branches and other references were updated in the local repository, and also maintain a log history of branches/tags that were either created locally or checked out. Reflogs are useful in various Git commands to specify the old value of a reference, and they can be used for recovery purposes.

For recovery, the reference must exist in the local repository: it was either created locally or checked out from the remote repository, so reference logs were recorded.

The reflog shows a snapshot of when the branch was created, renamed, or committed to, as maintained by Git. For example, HEAD@{5} refers to "where HEAD used to be five moves ago", and master@{two.weeks.ago} refers to "where master pointed two weeks ago in this local repository".

git log shows the current HEAD and the ancestry of its parents: it prints the commit HEAD points to, then its parent, then that commit's parent, and so on.

git reflog, on the other hand, does not show HEAD's ancestry. It is an ordered list of the commits that HEAD has pointed to: effectively the undo history of our repository.


Blue-Green Deployment is a type of continuous deployment that uses two identical environments, Blue and Green, both running a production version of the application but configured so that one is live and the other is idle. It focuses mainly on redirecting traffic between two environments running different versions of the application.

This deployment pattern reduces downtime and the risk that comes with a deployment: if any error occurs with the new version, we can immediately roll back to the stable version by swapping the environments.

To implement Blue-Green deployment, there must be two identical environments, as well as a router or load balancer so that traffic can be routed to the desired environment.

Either the blue or the green environment holds the old version of the application, while the other holds the new version.

Production traffic is moved gradually from the old-version environment to the new-version environment, and once it is fully transferred, the old environment is kept on hold in case a rollback is needed.

We can implement Blue-Green deployment in AWS using the Elastic Beanstalk service and its environment-swap capability, which automates the deployment process. Elastic Beanstalk makes deployment easy: once we upload the application code with a version label and provide information about the application, it deploys the application to the Blue environment and gives us its URL. The same environment configuration is then copied and used to launch the new version of the application, i.e., the Green environment, with its own separate URL.

At this point the application is up with two environments, but traffic is routed only to the Blue environment.

To switch to Green and redirect traffic to it, we choose the other environment's details in the Elastic Beanstalk console and swap it from the Actions menu. Elastic Beanstalk then performs a DNS switch, and once the DNS change is done we can terminate the Blue environment; traffic is now redirected to the Green environment.

If a rollback is required, we invoke the environment URL swap again.

Other than this, there are a number of other AWS solutions we can use to implement Blue-Green deployment, some of which are:

  • DNS Routing with Route53 
  • Swapping of Autoscaling Group with ELB 
  • Blue-Green Deployment using AWS Code Deploy 
  • Cloning Stack in OpsWork and Updating DNS. 

Blue-Green deployment provides many benefits to a DevOps team and has proven useful for deploying new application features and shipping bug fixes, but it can be used only under the conditions below:

  • There should be identical and isolated environments.
  • There should be provision for a router or load balancer.
  • The system should work with continuous updates.

These factors can increase cost, since the project has to bear the expense of running and maintaining two production environments, but the cost can be controlled and managed reasonably well if it is planned properly.

Reference: https://www.knowledgehut.com/blog/devops/blue-green-deployment 

Sanity testing is a subset of regression testing performed to ensure that the code changes made by the development team work on the system. During sanity testing the team focuses on validating the functionality of the application, not on detailed testing of each and every component.

Git provides a pre-commit hook, which is triggered just before a commit happens. We can write a script using this hook to implement such checks.

The script can perform sanity checks on the changes that are about to be committed to the repository.

A sample script might check whether any .py files being committed are properly formatted with a Python formatting tool such as pyfmt (an auto-formatting tool for Python source code). If the files are not properly formatted, the script prevents the changes from being committed by exiting with status 1.
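A minimal sketch of such a hook (the original sample script is not reproduced here; the pyfmt check invocation and paths are assumptions):

#!/bin/bash
# .git/hooks/pre-commit -- reject the commit if staged Python files are not formatted
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.py$')
[ -z "$files" ] && exit 0          # nothing Python-related staged, allow the commit

for f in $files; do
    if ! pyfmt --check "$f"; then  # hypothetical flag; substitute your formatter's check option
        echo "Formatting check failed for $f -- commit aborted."
        exit 1
    fi
done
exit 0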

NRPE stands for 'Nagios Remote Plugin Executor'. It is the Nagios agent that allows Nagios plugins to be executed remotely on other Linux or Unix machines, enabling system monitoring via scripts. It helps monitor remote machine metrics such as disk usage and CPU load. NRPE can also communicate with some of the Windows agent add-ons, which lets us check metrics on remote Windows machines through those add-ons.

The dogpile effect refers to the event where a cache expires and a website is hit by multiple client requests at the same time. It is also known as a cache stampede, which occurs when a computing system that employs caching comes under high load.

One common way to prevent the dogpile effect is to implement semaphore locks in the cache: when the cache expires, the first process acquires the lock and starts generating the new value for the cache.

The "Trigger builds remotely" option provides the flexibility to trigger a job in various ways, such as from a script, the command line, or hooks fired when someone commits a code change.

We can perform various Jenkins functions programmatically using the Jenkins Remote Access API, such as:

  • Retrieving information about jobs, views, nodes, builds, etc.
  • Triggering a build, stopping a build, enabling/disabling a job, grouping/removing jobs into/from views, etc.
  • Creating/copying/modifying/deleting jobs.

A build can be triggered via curl or wget as shown below; we can also use Postman or a GitHub webhook to trigger the job:

$ curl -X POST -u user:apitoken http://jenkins/job/yourorg/job/yourrepo/job/master/build
$ wget http://jenkins/job/yourorg/job/yourrepo/job/master/build
$ wget --auth-no-challenge --http-user=username --http-password=api_token http://jenkins/job/yourorg/job/yourrepo/job/master/build

Tracking down the commit that introduced a bug is generally a time-consuming process if we are not sure which commit is at fault. For this, Git has the git bisect command, which works on the principle of binary search.

git bisect start             # starts the bisecting session
git bisect bad               # marks the current revision as bad
git bisect good <revision>   # marks the last known good revision

After running these, Git checks out a revision halfway between the known "good" and "bad" versions. We then run the specs again and mark the revision as good or bad accordingly.

This process continues until we find the commit that introduced the bug.
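If the failure can be detected by a script or test command, git bisect can automate the whole search (the test script name below is illustrative):

git bisect run ./run_tests.sh   # Git re-runs the script at each step and marks revisions good/bad from its exit code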

Both commands are designed to integrate changes from one branch into another; they just do it in very different ways.

When in doubt, it is generally recommended to use git merge rather than git rebase.

  • Merge takes all the changes in one branch and merges them into another branch in one commit. 
  • The resulting tree structure of the history (generally only noticeable when looking at a commit graph) is different (one will have branches, the other won't). 
  • Merge and rebase handle conflicts differently: rebase presents conflicts one commit at a time, whereas merge presents them all at once.

Here is a quick comparison of merge and rebase:

  • Git merge lets us merge branches in Git, whereas Git rebase allows developers to integrate changes from one branch onto another.
  • With Git merge we can view the complete merging history of commits, whereas with Git rebase the log becomes linear.
  • In a merge, all the commits on the feature branch are combined into a single merge commit on the master branch, so merging is preferred when the target branch is a shared branch. In a rebase, all the commits are replayed and the same number of commits is added to the master branch, so rebase should be used when the target branch is a private branch.

Below are some factors that help decide when to use merge and when to use rebase:

  • For open-source or public repositories, it is recommended to use git merge, since many developers may be contributing to the branch. Using git rebase in that scenario rewrites the branch and can leave other contributors with broken repositories, which they then have to repair with git pull --rebase.
  • Rebase can result in the loss of committed work if not applied correctly. It is a destructive operation that can break the consistency of other developers' contributions.
  • Whenever we need to revert back to a previous commit, doing so after a rebase becomes very difficult; in such cases the merge operation comes in handy.
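As a quick command-level sketch (the branch names main and feature are illustrative):

# Merge: bring feature into main with a merge commit; history keeps both branches
git checkout main
git merge feature

# Rebase: replay the feature commits on top of main, producing a linear history
git checkout feature
git rebase main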

Backing up and restoring Jenkins data is needed for disaster recovery, auditing, and many other purposes, so having a good backup and restore process for a Jenkins instance is critical.

A few other situations where a backup is required:

  • Disaster recovery.
  • Recovering an older configuration (an accidental configuration change may not be discovered for some time).
  • Recovering a file that is corrupted or was deleted accidentally.

Files to back up: 

$JENKINS_HOME: Backing up the entire $JENKINS_HOME directory preserves the entire Jenkins instance, including build logs, job configs, plugins, and configuration. To restore the system, just copy the entire backup to the new system.

If we only want to back up the configuration files, they live in the $JENKINS_HOME directory: ./config.xml is the main Jenkins configuration file, and other configuration files also have the .xml suffix. Specify $JENKINS_HOME/*.xml to back up all configuration files.

Configuration files can also be stored in an SCM repository. This keeps copies of all previous versions of each file, which can be retrieved using standard SCM facilities.
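A simple way to take such a backup from the command line (the paths are illustrative; for a fully consistent copy, quiesce or stop Jenkins first):

tar -czf /backup/jenkins-home-$(date +%F).tar.gz -C /var/lib/jenkins .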

Jenkins also has the Thin Backup plugin, which backs up all the data on the schedule we specify and handles backup retention as well.

We simply install the Thin Backup plugin from the Manage Jenkins section and enable it from its settings tab, providing details such as what to back up and the backup directory.

Restoring:

  • Backup files are in tar+zip format.
  • We can copy them to another server, where they can be unzipped and untarred.

Docker has a client-server architecture with three main components: the Docker client, the Docker host, and the Docker registry.

The Docker client talks to the Docker daemon, which runs on the host operating system, listens for Docker API requests, and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

The Docker client and daemon can run on the same system. The components of the Docker architecture are:

Docker Client: This performs Docker build, pull, and run operations to communicate with the Docker host, and it is the primary way users interact with Docker. When we run docker commands, the client sends them to dockerd, which carries them out. The docker commands use the Docker API, and a Docker client can communicate with more than one daemon.

Docker Host: This contains the Docker daemon, images, containers, networks, and storage. The images act as templates for the applications that are containerized. The Docker host provides the environment to execute and run applications.

Docker Objects: When we use Docker, we are creating and using images, containers, networks, volumes, plugins, and other objects. Below is a brief overview of some of them.

  1. Images: An image is a read-only template containing the instructions for creating a Docker container, used to store and ship applications. We can either create our own images or use images created by others and published in a registry. To build our own image, we create a Dockerfile defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image, and when we change the Dockerfile and rebuild the image, only the layers that changed are rebuilt. This is part of what makes images so lightweight, small, and fast compared to other virtualization technologies.
  2. Containers: A container is a runnable instance of an image. With the Docker API or CLI, we can start, stop, delete, or move a container. We can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
  3. Networks: Docker networking provides isolation for Docker containers, and it requires far fewer operating system instances to run the workload.

There are primarily five network drivers in Docker:

  1. bridge: The default network driver, used when the application runs in standalone containers, i.e., different containers communicating through the same Docker host.
  2. host: This driver removes the network isolation between the Docker container and the Docker host; it is used when no isolation between container and host is needed.
  3. overlay: This enables swarm services to communicate with each other.
  4. none: This disables all networking.
  5. macvlan: This driver assigns MAC (Media Access Control) addresses to containers to make them look like physical devices, and routes traffic between containers through their MAC addresses.

  4. Storage: We can store data within the writable layer of a container, but that requires a storage driver; the storage driver controls and manages the images and containers on the Docker host.

With respect to persistent storage, Docker offers four options:

  1. Data volumes
  2. Data volume containers
  3. Directory mounts
  4. Storage plugins

Docker Registry: All Docker images are stored in a Docker registry. The public registry, known as Docker Hub (or Docker Cloud), is accessible to everyone and can be used by anyone.

We can also run our own private registry; docker pull and docker run will then pull the required image from the configured private registry.

Reference: https://docs.docker.com/get-started/overview/ 

The "Error response from daemon: Timeout was reached before node joined." failure happens when the ‘manager docker machine is not active due to which  the new node machine will not be able to join the swarm cluster. 

To fix this: 

Step 1: Check for the active machine hosts as: 

$ docker-machine active 

Step 2: Activate the ‘manager’ machine as: 

$ eval $(docker-machine env manager) 
$ docker-machine active 

Step 3: Get the swarm join token as worker 

$docker swarm join-token worker 

Step 4: Connect to the worker machine, say worker1:

$docker-machine ssh worker1 

Step 5: Run the docker swarm join command (with the token from step 3) on the worker; it should now succeed.

Handlers are special tasks that get executed only when triggered by a notifier in another task.

The difference between handlers and regular tasks is that handlers run only when notified via the notify directive, and they are generally executed at the end of the play, after all tasks have completed.

They are typically used in playbooks where configuration changes require services to be restarted, started, reloaded, or stopped.

Example of Ansible handlers: Consider a scenario where we want to restart the Apache HTTP server after configuring a virtual host with an Ansible playbook. In this case we would first specify two tasks in the playbook: one task to configure the virtual host file and another task to restart Apache.

The challenge is that the Apache service would then be restarted on every playbook run, whether or not anything changed in the configuration file. That is because the task explicitly restarts Apache without taking the state of the Apache service into account, and restarting Apache on every run is undesirable because it adds resource overhead.

This is where handlers come to the rescue.

We can write a playbook consisting of a regular task and a handler: the regular task installs or configures the Apache HTTP server on the target system and, when it reports a change, notifies the handler task that starts or restarts Apache.

As discussed earlier, regular tasks are executed before handlers. If the playbook is run again and the state and configuration of the Apache service remain unchanged, the handler task will not run, preventing unnecessary Apache restarts.
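A minimal playbook sketch of this pattern (the host group, paths, and the vhost.conf.j2 template name are illustrative; on Debian-based systems the service is apache2 rather than httpd):

- hosts: webservers
  become: true
  tasks:
    - name: Configure virtual host
      template:
        src: vhost.conf.j2                  # hypothetical template shipped with the playbook
        dest: /etc/httpd/conf.d/vhost.conf
      notify: Restart Apache                # handler runs only if this task reports "changed"

  handlers:
    - name: Restart Apache
      service:
        name: httpd
        state: restarted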

Yes, we can use JSON instead of YAML for the Docker Compose file.

To run docker compose with JSON, below can be used: 

docker-compose -f docker-compose.json up 

This error occurs when we try to add a node that has already joined another swarm. If we want to add the node to a new swarm, it first needs to leave the earlier swarm using the command:

$ docker swarm leave --force 
Node left the swarm. 

RAM in a computer is actively used to store the data and programs needed while programs are running.

Linux systems also have a second type of memory: swap space.

Swap space in Linux substitutes disk space for RAM when RAM fills up and the system needs more space to run programs and store data.

Swap space is used when RAM is full, but it cannot be considered a replacement for more RAM. For example, a spreadsheet application may need no extra space at first, but as its data grows and other applications run alongside it, RAM can fill up; swap space then helps keep the applications running.

Types of swap memory:

Swap space can be a dedicated swap partition, a swap file, or a combination of swap partitions and swap files.

Swap partition: This is the default form of swap memory, a section of the hard drive dedicated to the system for swapping.

Swap file: A swap file provides temporary swap storage when there is not enough space left on the hard drive to create a swap partition. It swaps sections of RAM belonging to idle programs out to the file and frees up memory for other processes.

In this way, the computer can effectively use more memory than is physically present.

Recommended swap space by amount of system RAM:

  • 2 GB of RAM or less: swap should be twice the RAM.
  • 2 GB to 8 GB of RAM: swap should be the same size as the RAM.
  • 8 GB to 64 GB of RAM: swap should be 0.5 times the RAM.
  • More than 64 GB of RAM: swap space depends on the workload.

Creating swap space and enabling it in Linux:

  • Log in as the superuser:
su
password: root-password
  • Create a file in a chosen directory to add swap space by typing:
dd if=/dev/zero of=/<dir>/<myswapfile> bs=1024 count=<number_blocks_needed>

where <dir> is a directory in which you have permission to add swap space, myswapfile is the name of the swap file you are creating, and number_blocks_needed is the number of 1024-byte blocks you want to create.

  • Verify that the file was created by typing:
ls -l /dir/myswapfile
  • Initialize the new swap area by typing:
mkswap /dir/myswapfile
  • Run the swapon command to enable the new swap space for paging and swapping:
swapon /dir/myswapfile
  • Verify that the extra swap space was added by typing:
swapon -s

The output shows the allocated swap space. 
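To make the swap file available again after a reboot, an entry is typically added to /etc/fstab; a sketch using the file created above:

/dir/myswapfile   none   swap   sw   0   0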

To control the start-up order, we can use the depends_on key in the docker-compose.yml file.

Example: 

version: "2.4" 
services: 
 backend: 
build: . 
depends_on: 
- db 
 db: 
image: postgres 

Adding service dependencies has several effects; in this example, docker-compose up starts the db container before the backend container, as specified by the dependency.
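As a further sketch (an assumption on top of the original example, valid for the 2.x Compose file format), depends_on can also wait for the database to be healthy rather than merely started, using a healthcheck:

version: "2.4"
services:
  backend:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait until the db healthcheck passes
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5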


These are some of the most common DevOps interview questions you may come across in an interview. As a DevOps engineer, in-depth knowledge of CI/CD processes, automation tools, and the relevant technology is essential, along with a good understanding of the products, services, and systems involved. DevOps focuses on improving the speed of development, problem-solving, and innovation in the production environment, and on enhancing the reliability of applications. It also bridges the gap between the conflicting goals and priorities of developers (a constant need for change) and operations (a constant resistance to change) by creating a smooth path for continuous development and continuous integration. Being a DevOps engineer has huge benefits because of the ever-increasing demand for DevOps practices across organizations, and the DevOps interview questions and answers above, along with a DevOps certification, will help you cover many of these aspects.

If you want in-depth knowledge of each and every DevOps topic, you can enrol in our comprehensive DevOps Engineer training course.
