
Shell Scripting Interview Questions and Answers 2024

A shell script has loop and conditional control structures that let you choose which commands to run or repeat. It is an effective tool for system administrators, developers, and power users, since it can automate repetitive tasks and save time. Whether you are a beginner or an experienced user, the shell scripting interview questions in this guide cover a range of topics, including shell commands, variables, functions, and control structures. For more experienced users, it also touches on advanced topics like Docker and Kubernetes, as well as best practices for writing clean, efficient code. You'll learn how to use command-line tools like awk, sed, and grep to streamline your workflow. With detailed explanations for each question, this guide is a great resource for anyone who wants to gain a deeper understanding of shell scripting, whether preparing for an interview or looking to take their skills to the next level.


Beginner

A program written in a shell programming language, such as bash, csh, or ksh, is referred to as a shell script. Shell scripts are used to automate operations that are frequently carried out with a shell, such as launching a web server, configuring a development environment, or delivering software.

Shell scripts are advantageous because they automate sequences of commands that would otherwise have to be run by hand, which saves time and reduces error-prone manual work. Shell scripts are also portable: they can be executed on any machine that has a compatible shell installed.

Shell scripts are frequently used in software development, DevOps, and system administration. They can be used to automate processes for maintaining infrastructure, building and deploying software, and running tests. They can also automate operations that are specific to a platform or operating system.

Shell scripts can be used to develop unique utilities and tools, as well as to increase the functionality of already existing tools, in addition to automating operations. They can be used to execute operations that would be challenging or time-consuming to perform manually or to automate repetitive or complex processes.

To create a shell script, I will need a text editor, such as gedit, vim, or emacs. I could also use a word processor, such as Microsoft Word or Google Docs, but I would have to make sure to save the file as plain text.

To create a shell script, I will follow these steps: 

  • Open a text editor or word processor and create a new file. 
  • At the top of the file, I will add a shebang line that specifies the path to the shell that I want to use to interpret the script. For example, to use bash, I would add the following line at the top of the file: 

#!/bin/bash 

  • Below the shebang line, I will add the commands that I want to execute in my script. 
  • Save the file with a .sh extension, such as script.sh. 
  • Make the script executable by running the following command: 

chmod +x script.sh 

  • Run the script by typing: 

./script.sh 

I can also run the script by specifying the path to the interpreter followed by the path to the script, like this: 

/bin/bash script.sh 

It is a good practice to include comments in our script to explain what the different parts of the script do. In a shell script, comments are denoted by a pound sign (#). Anything following the pound sign on the same line will be treated as a comment and ignored by the shell. 

For example: 

# This is a comment 

echo "Hello, world!" # This is also a comment 

This is one of the most frequently asked UNIX Shell Scripting interview questions.  

The #! (shebang) is a special line at the beginning of a shell script that tells the operating system which interpreter to use to execute the script. The shebang line consists of a pound sign (#) and an exclamation point (!) followed by the path to the interpreter executable. For example, the shebang line for a bash script might look like this: 

#!/bin/bash 

The shebang line is not required for all shell scripts, but it is a good practice to include it so that the script can be run directly from the command line without having to specify the interpreter. For example, if we include the shebang line in a script and make the script executable (using chmod +x script.sh), we can run the script simply by typing ./script.sh instead of bash script.sh. 

Without a shebang line, the script's behavior depends on how it is invoked: when run directly, it is typically interpreted by the calling shell's default interpreter, which may not be the shell the script was written for. Including the shebang makes the intended interpreter explicit. 

In a shell script, comments are lines of text that are not executed as part of the script but are included to provide documentation and explanations for the code. Comments are used to make the code easier to understand and to help other developers (or future versions of ourselves) understand what the code is doing.

To include a comment in a shell script, we can use the # character followed by the text of the comment. For example: 

# This is a comment 

Anything following the `#` character on the same line will be treated as a comment, so you can use comments to annotate specific lines of code: 

# Increment the value of the counter variable 

counter=$((counter+1)) 

We can also use comments to include multi-line blocks of documentation or explanations: 

: ' 

This is a multi-line comment. 

This comment block can be used to include 

detailed explanations or documentation for 

the code below. 

' 

Note that the `:` command is a shell built-in that does nothing, so it can be used to create a block of comments without actually executing any code. The single quotes around the comment block are necessary so that the shell passes the whole block to `:` as a single literal argument and does not try to expand or execute anything inside it. 

Expect to come across this popular question in UNIX Scripting interviews.  

To run a shell script, I will need to have a shell interpreter installed on my system. Common shell interpreters include bash, csh, and ksh. 

To run a shell script, I will follow these steps: 

I will make sure that the script is executable. I can make a script executable by running the chmod command and setting the executable flag: 

chmod +x script.sh 

Run the script by typing: 

./script.sh 

Alternatively, we can specify the path to the interpreter followed by the path to the script, like this: 

/bin/bash script.sh 

We can replace /bin/bash with the path to the shell interpreter that we want to use. 

We can also run a shell script by passing it to the interpreter as a command-line argument: 

bash script.sh 

Replace `bash` with the name of the shell interpreter that we want to use. 

The path to the shell interpreter we want to utilize should be specified in a shebang line at the beginning of our script. This allows us to run the script by simply typing `./script.sh`, regardless of the default shell interpreter on our system. 

For example, if our script is written in bash, we can include the following shebang line at the top of the script: 

#!/bin/bash 

This tells the system to use the bash interpreter to run the script. 

We can also specify command-line arguments when running a shell script. These arguments are passed to the script and can be accessed within the script using the variables `$1`, `$2`, `$3`, and so on. 

For example, the following script prints the first command-line argument: 

#!/bin/bash 

echo "The first argument is $1" 

To run the script and pass it a command-line argument, we can type: 

./script.sh foo 

This will print "The first argument is foo". 

In a shell script, we can perform input and output redirection using the < and > symbols. 

For example, to redirect the output of a command to a file, we can use the > symbol followed by the name of the file. For example: 

# Redirect the output of the "ls" command to a file called "directory_listing.txt" 

ls > directory_listing.txt 

To append the output of a command to a file, you can use the >> symbol instead of >. For example: 

# Append the output of the "ls" command to the file "directory_listing.txt" 

ls >> directory_listing.txt 

To redirect the input of a command from a file, you can use the < symbol followed by the name of the file. For example: 

# Sort the contents of the file "unsorted_list.txt" and store the result in "sorted_list.txt" 

sort < unsorted_list.txt > sorted_list.txt 

You can also combine redirection with pipes in a single command. For example: 

# Sort the contents of the file "unsorted_list.txt", save the result in "sorted_list.txt", and also display it on the screen 

sort < unsorted_list.txt | tee sorted_list.txt 

In this example, the output of the sort command is piped to the tee command, which writes it to the file "sorted_list.txt" and also displays it on the screen. (Note that redirecting stdout to a file with > would leave nothing for a following pipe to read, so tee is used instead.) 

Note that these redirections can also be used in combination with other shell commands and constructs, such as loops and conditional statements. 
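Standard error (file descriptor 2) can be redirected in the same way. A small sketch, using a made-up errors.txt file name and a directory assumed not to exist:

```shell
#!/bin/bash
# Redirect stderr (file descriptor 2) to a file; stdout is unaffected
ls /nonexistent_dir 2> errors.txt   # the error message lands in errors.txt
echo "done"                          # normal output still reaches the screen
```

Here the error message from ls goes into errors.txt, while the echo output still appears on the screen.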

Below is a list of some common shell commands that you might use in a shell script:

  • cd: Change the current working directory 
  • ls: List the contents of a directory 
  • mkdir: Create a new directory 
  • mv: Move or rename a file or directory 
  • rm: Remove a file or directory 
  • cp: Copy a file or directory 
  • echo: Display a message or the value of a variable on the screen 
  • cat: Display the contents of a file on the screen 
  • grep: Search for a pattern in a file or standard input 
  • sort: Sort the lines of a file or standard input 
  • uniq: Remove duplicate lines from a file or standard input 
  • wc: Count the number of lines, words, or bytes in a file or standard input 
  • find: Search for files or directories that match a particular pattern 
  • date: Display the current date and time 
  • time: Measure the execution time of a command 

These are just a few examples, and there are many more shell commands that we can use in a shell script. Some commands are specific to certain shells (e.g., Bash, Zsh, etc.), while others are available in most shells. 
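Several of these commands are often chained together in a pipeline. A small sketch (the log file name and its contents are made up for illustration) that counts the distinct lines containing "ERROR":

```shell
#!/bin/bash
# Create a sample log file, then count the unique lines containing "ERROR"
printf 'ERROR disk full\nINFO started\nERROR disk full\nERROR timeout\n' > log.txt
grep "ERROR" log.txt | sort | uniq | wc -l   # two distinct ERROR lines
```

The grep output is sorted so that uniq can collapse duplicates, and wc -l counts what remains.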

This is one of the most commonly asked shell scripting commands interview questions. 

There are many different shell environments that are commonly used, including: 

  • Bash (Bourne Again Shell): Bash is a Unix shell and command language that is widely used as the default shell on many Linux and macOS systems. It is also available on other platforms such as Windows. Bash is known for its powerful scripting capabilities and its support for a wide variety of commands and utilities. 
  • Zsh (Z Shell): Zsh is another Unix shell that is similar to Bash, but it has some additional features such as better support for interactive use and better handling of command line editing. 
  • Fish (Friendly Interactive Shell): Fish is a Unix shell that is designed to be more user-friendly and interactive than other shells, with features such as auto-suggestions and syntax highlighting. 
  • Csh (C Shell): Csh is a Unix shell that was developed as an alternative to the Bourne shell (sh). It is known for its support for aliases and history substitution, as well as its C-like syntax. 
  • Ksh (Korn Shell): Ksh is a Unix shell that was developed as an improvement on the Bourne shell. It is known for its support for functions and advanced shell scripting features. 
  • Tcsh (Tenex C Shell): Tcsh is a Unix shell that is based on the C shell, with additional features such as command line editing and improved support for variables and arrays. 
  • Dash (Debian Almquist Shell): Dash is a Unix shell that is used as the default shell on some Debian-based systems. It is a lightweight shell that is designed to be faster than other shells, but it has fewer features than some other shells. 

In a shell script, command line arguments are stored in special variables. The first argument is stored in the variable $1, the second argument is stored in the variable $2, and so on. The variable $0 contains the name of the script itself. 

Below is an example of a simple script that uses command line arguments: 

#!/bin/bash 
echo "The first argument is: $1" 
echo "The second argument is: $2" 
echo "The third argument is: $3" 

To use command line options in a script, we can use the getopts built-in command. This command allows us to specify which options the script should accept, and then provides a way for the script to access the values of those options. 

Here is an example of a script that uses the getopts command: 

#!/bin/bash 
while getopts ":a:b:" opt; do 
  case $opt in 
    a) 
      echo "Option -a was specified with value $OPTARG" 
      ;; 
    b) 
      echo "Option -b was specified with value $OPTARG" 
      ;; 
    \?) 
      echo "Invalid option: -$OPTARG" 
      exit 1 
      ;; 
    :) 
      echo "Option -$OPTARG requires an argument." 
      exit 1 
      ;; 
  esac 
done 

In this example, the script will accept two options, -a and -b, which can be followed by a value. The getopts command will parse the command line arguments and set the variables $opt and $OPTARG accordingly. The script can then use these variables to determine which options were specified and what their values were. 
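As a usage sketch, the getopts loop above can be saved to a file (opts.sh is a hypothetical name) and invoked with both options:

```shell
#!/bin/bash
# Write the getopts example to opts.sh, make it executable, and run it
cat > opts.sh <<'EOF'
#!/bin/bash
while getopts ":a:b:" opt; do
  case $opt in
    a) echo "Option -a was specified with value $OPTARG" ;;
    b) echo "Option -b was specified with value $OPTARG" ;;
    \?) echo "Invalid option: -$OPTARG"; exit 1 ;;
    :) echo "Option -$OPTARG requires an argument."; exit 1 ;;
  esac
done
EOF
chmod +x opts.sh
./opts.sh -a foo -b bar
```

This run prints one line per option, each showing the value that followed it.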

In a shell script, we can use the for loop to iterate over a sequence of values. The syntax for a for loop is: 

for variable in list 
do 
command1 
command2 
... 
done 

Here is an example of a for loop that iterates over a range of numbers: 

#!/bin/bash 
for i in {1..10} 
do 
echo "$i" 
done 

This script will output the numbers 1 through 10. 

You can also use the while loop to execute a block of commands repeatedly while a particular condition is true. The syntax for a while loop is: 

while condition 
do 
command1 
command2 
... 
done 

Here is an example of a while loop that counts down from 10 to 1: 

#!/bin/bash 
i=10 
while [ $i -gt 0 ] 
do 
echo "$i" 
i=$((i-1)) 
done 

This script will output the numbers 10 through 1. 

We can also use the break and continue statements to control the flow of a loop. The break statement will cause the loop to terminate early, while the continue statement will skip the rest of the current iteration and move on to the next one. 
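A short sketch combining both statements: continue skips even numbers, and break ends the loop early once the counter passes 7:

```shell
#!/bin/bash
# Print odd numbers from 1 to 7 using continue and break
for i in {1..10}
do
  if [ $((i % 2)) -eq 0 ]; then
    continue   # skip even numbers and start the next iteration
  fi
  if [ "$i" -gt 7 ]; then
    break      # stop the loop entirely once i exceeds 7
  fi
  echo "$i"
done
```

This prints 1, 3, 5, and 7, each on its own line.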

There are several techniques you can use to debug a shell script: 

  1. Use the set -x option to enable script tracing. This will cause the script to print each command to the terminal before it is executed, which can help you see what is happening at each step. 
  2. Use the bash -n script.sh command to check the syntax of the script without running it. This can help you find syntax errors that would otherwise cause the script to fail. 
  3. Use the bash -v script.sh command to run the script in verbose mode. This prints each line of the script as it is read, before any substitutions are performed, which can help you spot problems in the script text itself. 
  4. Use the bash -x script.sh command to run the script with tracing enabled. This prints each command as it is executed, after variable and command substitution, so you can see the actual values being used at each step. 
  5. Use the echo command to print the values of variables and other expressions to the terminal. This can help you see what is happening at each step and identify any problems. 
  6. Use the bash debugger (bashdb) to step through the script and examine the values of variables and other expressions. 
  7. Use the trap command to run a handler command whenever a specified signal or shell event (such as DEBUG or ERR in bash) occurs, for example to print variable values at that point. This can be useful for debugging scripts that run for a long time or that are difficult to reproduce.
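As a sketch of the first technique, set -x can be switched on for just one section of a script and off again with set +x:

```shell
#!/bin/bash
# Trace one section of a script with set -x, then disable it with set +x
set -x                # commands below are echoed (with a "+ " prefix) to stderr before running
name="world"
echo "Hello, $name"
set +x                # tracing stops here; later commands run silently
```

Limiting the traced region keeps the debug output manageable in longer scripts.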

There are a few situations where shell programming/scripting may not be the best choice: 

  • Complex/Hard project Tasks: If the task you are trying to automate is very complex and involves a lot of processing or manipulation of data, a shell script may not be the most efficient or effective solution. In these cases, a more powerful programming language such as Python or Java may be more suitable. 
  • Performance-critical Tasks: If the task you are trying to automate is performance-critical and needs to run as quickly as possible, a shell script may not be the best choice. This is because shell scripts can be slower to execute than programs written in compiled languages. 
  • Cross-platform Compatibility: If you need your script to run on multiple platforms, such as Windows and Linux, shell scripts may not be the best choice. This is because shell scripts are specific to Unix-like operating systems and may not run on other platforms without significant modification. 

That being said, shell scripts are still a very useful tool for automating a wide variety of tasks and can be a good choice in many situations. It's important to consider the specific requirements of your task and choose the right tool for the job. 

The default permissions of a file when it is created depend on the umask (user mask) of the user who is creating the file. The umask is a value that determines the default permissions for newly created files and directories. It is specified in octal notation, and the permission bits it specifies are masked off (removed) from the default permissions set by the system. 

For example, if the umask is set to 022 (octal), the default permissions for a newly created file will be 644 (rw-r--r--). This means that the owner of the file will have read and write permissions, and others will have read-only permissions. 

If the umask is set to 002 (octal), the default permissions for a newly created file will be 664 (rw-rw-r--). This means that the owner and members of the owner's group will have read and write permissions, and others will have read-only permissions. 

We can view and change the umask of our user account using the umask command. For example, to set the umask to 022 (octal), we can use the command umask 022. 

Keep in mind that the default permissions of a file may also be influenced by the permissions of the directory in which the file is being created, and by any default ACLs (access control lists) that are set on the system. 
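A quick way to see this in action (demo_file is a throwaway name used for illustration):

```shell
#!/bin/bash
# Show the effect of umask on a newly created file's permissions
umask 022                  # mask off group/other write bits on new files
touch demo_file            # regular files start from mode 666, so the result is 644
ls -l demo_file            # the first column should read -rw-r--r--
```

Note that new regular files start from 666 (not 777), because the execute bit is never set by default.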

There are four key elements of a Linux file system: 

  • Superblock: This is a special block that contains information about the file system as a whole, such as the total number of blocks, the number of free blocks, and the layout of the file system. 
  • Inode Table: This is a table that stores information about each file and directory in the file system, such as the owner, permissions, size, and location of the file data. 
  • Data Blocks: These are the blocks that actually contain the data for the files and directories in the file system. 
  • File Names: These are the names that are used to identify the files and directories in the file system. Each file and directory has a name, unique within its directory, that can be used to access it. 

The kernel is the central part of a computer operating system that controls all the other parts of the system. It is responsible for managing the hardware and software resources of the system, including the CPU, memory, and input/output devices.

The kernel is the lowest level of the operating system and is responsible for interfacing with the hardware and providing a platform for other software to run on top of. It is a fundamental part of the operating system and is essential for the proper functioning of the computer.

In Linux, we can make a variable unchangeable by using the readonly builtin command in the Bash shell. This command can be used to mark a shell variable as read-only, which means that it cannot be modified or unset. 

Below is an example of how to use the readonly command: 

$ x=10 
$ readonly x 
$ x=20 

bash: x: readonly variable 

In the example above, the variable x is first set to the value 10. The readonly command is then used to mark x as a read-only variable. Finally, an attempt is made to change the value of x to 20, but this results in an error because x is a read-only variable. 

It is important to note that the readonly command only works within the Bash shell and does not affect the variables in other processes or programs. It is also specific to the shell in which it is used, so a variable marked as read-only in one shell may not be read-only in another shell. 

Shell scripting has several drawbacks that you should be aware of:

  • Lack of Portability: Shell scripts are not portable across different operating systems and shells, so you may have to make changes to your scripts if you want to use them on a different system. 
  • Lack of Data Types: Shell scripts do not have data types, which can make it difficult to write complex scripts that perform tasks such as data validation or parsing. 
  • Lack of Error Handling: Shell scripts do not have exception handling or debugging facilities, which can make it difficult to track down and fix errors in your scripts. 
  • Limited Functionality: Shell scripts are limited in their capabilities compared to other programming languages, so you may not be able to perform certain tasks as easily or efficiently. 
  • Security Vulnerabilities: Shell scripts can be vulnerable to security threats such as injection attacks if they are not written carefully. 

Overall, shell scripting is a useful tool for automating tasks and interacting with the command line, but it may not be the best choice for larger or more complex projects. In these cases, you may want to consider using a more powerful programming language such as Python or C++. 

It's no surprise that this question pops up often in Shell Scripting interviews.  

There are several ways to create a shortcut in Linux, depending on what we want to do and the desktop environment we are using. Here are a few options: 

  • Create a Symbolic Link: We can use the ln command to create a symbolic link, which is a special type of file that points to another file or directory. For example, ln -s /path/to/original /path/to/link creates a symbolic link called link in the /path/to/ directory that points to the /path/to/original file or directory. 
  • Create a Desktop Shortcut: If we are using a desktop environment such as GNOME or KDE, we can create a desktop shortcut by right-clicking on the desktop and selecting "Create Launcher." This will open a dialog where we can enter the name and command for the shortcut. 
  • Add a Command Alias: We can use the alias command to create an alias for a command, which allows us to execute the command by typing the alias instead of the full command. For example, alias l='ls -la' creates an alias called l that runs the ls -la command when you type it. 
  • Create a Keyboard Shortcut: Many desktop environments allow us to create keyboard shortcuts for frequently used commands. We can typically do this by going to the settings or preferences for our desktop environment and looking for the option to create custom keyboard shortcuts. 

It is important to note that the steps to create a shortcut may vary depending on our specific Linux distribution and desktop environment. 

In Linux, shell programs (also known as shell scripts) are typically stored in files with a .sh file extension. These files can be stored in any directory on the system, but there are a few common locations that are used:

  • /bin/: This directory contains executables that are available to all users on the system. 
  • /usr/local/bin/: This directory contains executables that are installed by the system administrator and are available to all users on the system. 
  • /usr/bin/: This directory contains executables that are part of the operating system and are available to all users on the system. 
  • /usr/local/sbin/: This directory contains executables that are used for system administration tasks and are only available to the root user. 
  • /usr/sbin/: This directory contains executables that are used for system administration tasks and are only available to the root user. 

It is important to note that these directories are just conventions, and we can store the shell scripts in any directory on the system. However, using one of these directories can make it easier for users to access and run the scripts. 

A common Shell Scripting interview question for DevOps, don't miss this one.  

In Linux, a hard link is a type of link that points directly to the inode of a file, while a soft link (also known as a symbolic link or symlink) is a type of link that points to the file name of a file. Here are some key differences between hard links and soft links: 

Hard Links 

  • Can only be created for files, not directories 
  • Point directly to the inode of a file, which is a unique identifier for a file on the file system 
  • Can only be created on the same file system as the original file 
  • Do not affect the original file's permissions or ownership 
  • If the original file name is deleted, the hard link still points to the same inode, and the data remains on disk until every remaining link to that inode is removed 

Soft Links 

  • Can be created for files or directories 
  • Point to the file name of the original file 
  • Can be created on a different file system than the original file 
  • Have their own permissions (usually shown as lrwxrwxrwx), but access through the link is governed by the permissions and ownership of the original file 
  • If the original file is deleted, the soft link will be broken and will no longer point to a valid file 
Overall, hard links are more efficient and have lower overhead than soft links, but they are more limited in their functionality. Soft links are more flexible, but they can break when the target is moved or deleted and are slightly less efficient than hard links. 
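The difference is easy to demonstrate (the file names here are made up for illustration):

```shell
#!/bin/bash
# Contrast a hard link and a soft link after the original name is removed
echo "data" > original.txt
ln original.txt hard.txt        # hard link: shares the same inode as original.txt
ln -s original.txt soft.txt     # soft link: stores the path "original.txt"
rm original.txt
cat hard.txt                    # prints "data" - the inode still has one link left
cat soft.txt 2>/dev/null || echo "soft link is broken"
```

After the original name is removed, the hard link still reaches the data, while the soft link dangles because the path it stores no longer exists.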

Intermediate

In a shell script, we can define a function by using the following syntax: 

function function_name { 
commands 
} 

For example, the following script defines a function called greet that prints a greeting: 

#!/bin/bash 
function greet { 
echo "Hello, world!" 
} 
greet 

We can also define a function using the alternative syntax, in which the function name is followed by parentheses (this form is more portable across shells): 

#!/bin/bash 
greet() { 
echo "Hello, world!" 
} 
greet 

To call a function, simply type its name as if it were a command; no parentheses are needed. For example: 

#!/bin/bash 
function greet { 
echo "Hello, world!" 
} 
greet 

We can also pass arguments to a function by listing them after the function name, separated by spaces, just like arguments to a command. For example: 

#!/bin/bash 
function greet { 
echo "Hello, $1!" 
} 

greet John 

Inside the function, the arguments are referred to as $1, $2, $3, and so on. The first argument is $1, the second is $2, and so on. 

We can use functions to modularize the script and make it easier to read and maintain. Functions can be used to perform a specific task, and then called multiple times throughout the script. This can make it easier to reuse code and avoid repeating the same code multiple times.  
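As a sketch of that modular style, a small helper function (the name and messages are made up for illustration) defined once and called several times:

```shell
#!/bin/bash
# Define a helper once and call it repeatedly instead of duplicating the echo
log() {
  echo "[info] $1"
}

log "starting backup"
log "backup finished"
```

If the message format ever needs to change, only the function body has to be edited.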
It is one of the basic questions asked in intermediate-level Linux shell scripting interviews. 

One of the most frequently posed UNIX and Shell Scripting interview questions, be ready for it.  

In a shell script, we can use variables to store data and manipulate it. To define a variable, we can use the following syntax: 

variable_name=value 

For example, the following script defines a variable called message and assigns it the value "Hello, world!": 

#!/bin/bash 
message="Hello, world!" 
echo $message 

To access the value of a variable, we can use the dollar sign ($) followed by the variable name. For example: 

#!/bin/bash 
message="Hello, world!" 
echo $message 

We can also use variables to store the output of a command. For example: 

#!/bin/bash 
current_directory=$(pwd) 
echo "The current directory is $current_directory" 

We can also use variables to store the results of arithmetic expressions. For example: 

#!/bin/bash 
x=5 
y=3 
result=$((x + y)) 
echo "The result is $result" 

In a shell script, it is important to remember that variables are case-sensitive and that they must be referenced using the dollar sign ($). 

It is also a good practice to use descriptive names for variables to make the code easier to read and understand, a point that often comes up in bash scripting interview questions. 

Here are some common shell scripting errors and ways to troubleshoot them: 

  • Syntax Errors: These occur when the shell encounters a command or script that it doesn't recognize as valid. To troubleshoot syntax errors, we need to check for typos, making sure that we are using the correct syntax for the command or script we are trying to run, and check for missing or extra characters. 
  • Missing or Incorrect Permissions: If we are trying to run a script or command that requires certain permissions, and don't have those permissions, we will get an error. To troubleshoot this error, we need to make sure we have the necessary permissions, or run the script or command with sudo to execute it with superuser privileges. 
  • Variable Expansion Errors: These occur when we are trying to use a variable in a script, but the variable has not been set or is set to an incorrect value. To troubleshoot this error, we need to make sure that the variable has been set correctly, or use the echo command to print the value of the variable to the console to see what it is set to. 
  • Redirection Errors: These occur when we are trying to redirect the output of a command or script to a file, but there is an error in the redirection syntax. To troubleshoot this error, we need to check the redirection syntax to make sure it is correct, and the file we are trying to write exists and has the correct permissions. 
  • Command not Found Errors: These occur when we try to run a command that the shell cannot find. This can happen if the command is not installed on our system, or if it is installed but not in our PATH. To troubleshoot this error, we need to make sure the command is installed and in our PATH, or specify the full path to the command. 
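As a sketch of guarding against the last category, a script can check whether a command exists with the command -v built-in before trying to use it (ls is used here only as an example of a command to test for):

```shell
#!/bin/bash
# Guard against "command not found" by checking for the command first
if command -v ls >/dev/null 2>&1; then
  echo "ls is available"
else
  echo "ls is not installed or not in PATH" >&2
  exit 1
fi
```

command -v prints the path (or definition) of the command and exits non-zero when it cannot be found, which makes it convenient in an if test.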

Shell scripts often use constructs such as loops and conditional statements to perform tasks. Below are some examples:

for loop: The for loop is used to iterate over a list of items. For example: 

for i in 1 2 3 4 5 
do 
echo $i 
done 

This script will print the numbers 1 through 5 to the console. 

while loop: The while loop is used to execute a block of code repeatedly while a certain condition is met. For example: 

counter=0 
while [ $counter -lt 5 ] 
do 
echo $counter 
counter=$((counter+1)) 
done 

This script will print the numbers 0 through 4 to the console. 

if statement: The if statement is used to execute a block of code if a certain condition is met.  

For example: 

if [ $1 -gt 5 ] 
then 
echo "The first argument is greater than 5" 
else 
echo "The first argument is not greater than 5" 
fi 

This script will print "The first argument is greater than 5" if the first argument passed to the script is greater than 5, and "The first argument is not greater than 5" otherwise. 

case statement: The case statement is used to execute a block of code based on a value matching a pattern. For example: 

case $1 in 
start) 
echo "Starting service..." 
# Start the service here 
;; 
stop) 
echo "Stopping service..." 
# Stop the service here 
;; 
*) 
echo "Invalid argument" 
;; 
esac 

This script will start the service if the first argument passed to the script is "start", stop the service if the first argument is "stop", and print "Invalid argument" if the first argument does not match either of those patterns. 

This is one of the most frequently asked UNIX Shell Scripting interview questions.  

Regular expressions, sometimes known as "regex," are an effective technique for finding patterns in text. In a shell script, we can use regular expressions with the grep command to search for patterns in a file or stream of input. 

Below is an example of using regular expressions with grep in a shell script: 

grep -E '^[0-9]+$' input.txt 

This command will search the input.txt file for lines that consist only of one or more digits (0-9). The -E option tells grep to use extended regular expressions, and the ^ and $ characters match the start and end of the line, respectively. 

We can also use regular expressions with the sed command to perform search and replace operations on a file or stream of input. Below is an example: 

sed -E 's/[0-9]+/X/g' input.txt 

This command will search the input.txt file for any sequences of one or more digits (0-9) and replace them with the letter "X". The -E option tells sed to use extended regular expressions, and the s/[0-9]+/X/g expression tells sed to perform a global (g) search and replace operation, replacing all occurrences of one or more digits with "X". 

In a shell script, we can use the if statement to execute a block of commands based on the value of a particular expression. The syntax for an if statement is: 

if condition 
then 
command1 
command2 
... 
fi 

We can also use the else and elif (else-if) clauses to specify additional conditions and blocks of commands to execute. 

Below is an illustration of an if statement that checks to see if a file is present:  

#!/bin/bash 
if [ -f "/path/to/file" ] 
then 
echo "File exists" 
else 
echo "File does not exist" 
fi 

In this example, the [ -f "/path/to/file" ] is a test that checks whether the file at the specified path exists and is a regular file. If the test evaluates to true, the echo command will be executed; if it evaluates to false, the else block will be executed. 
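The elif clause can be chained between if and else to test several conditions in order. Below is a minimal sketch; the variable name and threshold values are arbitrary, chosen only for illustration:

```shell
#!/bin/bash
# classify a number with an if/elif/else chain
x=7
if [ "$x" -gt 10 ]; then
    echo "large"
elif [ "$x" -gt 5 ]; then
    echo "medium"    # this branch runs for x=7
else
    echo "small"
fi
```

The conditions are evaluated top to bottom, and only the first matching branch is executed, so this script prints "medium".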

We can also use the case statement to execute a block of commands based on the value of a particular expression. The syntax for a case statement is: 

case expression in 
pattern1) 
command1 
command2 
... 
;; 
pattern2) 
command1 
command2 
... 
;; 
... 
esac 

Below is an example of a case statement that tests the value of a variable: 

#!/bin/bash 
case $VAR in 
abc) 
echo "VAR is set to 'abc'" 
;; 
def) 
echo "VAR is set to 'def'" 
;; 
*) 
echo "VAR is set to something else" 
;; 
esac 

In this example, the case statement will execute the appropriate block of commands based on the value of the $VAR variable. The * pattern is a catch-all that will match any value not matched by the other patterns. 

Errors and exceptions in a shell script can be handled using the following methods:

exit status: Every command in a shell script returns an exit status. An exit status of zero indicates success, while a non-zero exit status indicates failure. We can use the $? variable to check the exit status of a command and take appropriate action based on the status. For example: 

#!/bin/bash 
# run a command 
some_command 
# check its exit status 
if [ $? -eq 0 ]; then 
echo "Command succeeded" 
else 
echo "Command failed" 
fi 

try-catch emulation: Bash does not have built-in try, catch, or throw statements, but we can emulate a try-catch pattern using functions, subshells, and exit statuses. Below is an example: 

#!/bin/bash 
try() { 
# run the code that might fail in a subshell, so that 
# a failure does not terminate the whole script 
( set -e; some_command ) 
} 
catch() { 
# code to handle the error 
echo "$1" >&2 
} 
# run the try block and check its exit status 
if try; then 
echo "try block succeeded" 
else 
catch "Exception: some_command failed" 
fi 

trap: The trap command allows us to specify a command to run when a particular signal is received by the script. We can use the trap command to handle exceptions and errors in our script. For example: 

#!/bin/bash 
# define the trap function 
trap 'echo "Error: command failed" >&2; exit 1' ERR 
# run a command that might fail 
some_command 
# remove the trap 
trap - ERR 
echo "Command succeeded" 

set -e: We can use the set -e option to make the script exit immediately if any command returns a non-zero exit status. This can be useful for handling errors and exceptions in your script. For example: 

#!/bin/bash 
# set the -e option 
set -e 
# run a command that might fail 
some_command 
echo "Command succeeded" 

Expect to come across this popular question in UNIX Scripting interviews.  

There are many ways to work with files and directories in a shell script. Here are a few common ones: 

Listing Files and Directories: We can use the ls command to list the files and directories in a directory. For example, to list all the files and directories in the current directory, we can use: 

#!/bin/bash 
ls 

We can use various options with the ls command to customize the output. For example, to list the files and directories in a long format, sorted by modification time, we can use: 

#!/bin/bash 
ls -lt 

Changing Directories: We can use the cd command to change the current working directory.  

For example, to change the current directory to /etc, we can use: 

#!/bin/bash 
cd /etc 

Reading from Files: We can use the cat command to print the contents of a file to the terminal. For example, to print the contents of a file myfile.txt, we can use: 

#!/bin/bash 
cat myfile.txt 

We can also use the more and less commands to view the contents of a file, which allow us to page through the file. 

Writing to Files: We can use the echo command to write text to a file. For example, to write the text "Hello, World!" to a file myfile.txt, you can use: 

#!/bin/bash 
echo "Hello, World!" > myfile.txt 

We can also use the tee command to write the output of a command to a file, while still printing it to the terminal. For example: 

#!/bin/bash 
some_command | tee myfile.txt 

Copying Files: We can use the cp command to copy a file. For example, to copy a file src.txt to dest.txt, we can use: 

#!/bin/bash 
cp src.txt dest.txt 

We can also use the cp command to copy directories. For example, to copy a directory src to dest, we can use: 

#!/bin/bash 
cp -r src dest 

Moving and Renaming Files: We can use the mv command to move or rename a file. For example, to rename a file src.txt to dest.txt, we can use: 

#!/bin/bash 
mv src.txt dest.txt 

We can also use the mv command to move a file to a different directory. For example, to move a file src.txt to the /tmp directory: 

#!/bin/bash 
mv src.txt /tmp 

Removing Files: We can use the rm command to remove a file. For example, to remove a file myfile.txt: 

#!/bin/bash 
rm myfile.txt 

We can also use the rm command to remove directories. 
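For example, to remove a directory and everything inside it, we can pass the -r option (mydir is a hypothetical directory name used here for illustration):

```shell
#!/bin/bash
# -r removes the directory and its contents recursively
rm -r mydir
```

Use this with care: rm -r deletes without confirmation unless the -i option is also given.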

Here are a few common pitfalls to watch out for when writing shell scripts: 

  • Not Using set -e: The set -e option tells the script to exit immediately if any command returns a non-zero exit status. This can be very useful for handling errors and exceptions in the script. However, if we forget to set this option, our script may continue to run even if a command has failed, which can lead to unexpected behavior. 
  • Not Checking Exit Status: Every command in a shell script returns an exit status. It is important to check the exit status of a command and take appropriate action based on the status. For example, we should check the exit status of a command that might fail, and exit the script or raise an error if the status is non-zero. 
  • Not Quoting Variables: It is a good practice to always quote variables in shell scripts, to avoid issues with spaces and special characters in the values of the variables. For example, instead of echo $var, we should use echo "$var". 
  • Not Handling Input: If our script takes input from the user or from another command, we should validate and sanitize the input to avoid issues such as command injection attacks. 
  • Not Testing Thoroughly: It is important to thoroughly test our script to make sure it is working as expected. This includes testing the script with different inputs, edge cases, and error conditions. 
  • Not Using Version Control: If we are working on a larger or collaborative project, it is a good idea to use version control to track changes to our script. This will allow us to easily revert to previous versions of the script if needed and collaborate with others on the project. 

To pass arguments to a shell script, we can simply list them after the script name, separated by space. For example: 

./myscript.sh arg1 arg2 arg3 

Inside the script, we can access the arguments using the $1, $2, $3, etc. variables. For example: 

#!/bin/bash 
echo "Argument 1: $1" 
echo "Argument 2: $2" 
echo "Argument 3: $3" 

We can also use the $* and $@ variables to access all of the arguments as a single string or as separate strings, respectively. 

For example: 

#!/bin/bash 
# Print all arguments as a single string 
echo "All arguments: $*" 
# Print all arguments as separate strings 
echo "All arguments: $@" 

We can also use the $# variable to get the total number of arguments passed to the script. 

For example: 

#!/bin/bash 
echo "Total number of arguments: $#" 
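The difference between $* and $@ only matters when they are quoted: "$@" preserves each argument as a separate word, while "$*" joins all arguments into a single word. A small sketch of this distinction:

```shell
#!/bin/bash
# count how many arguments a function receives
count() { echo $#; }
set -- "one two" three   # simulate two script arguments
count "$@"   # prints 2: each argument stays a separate word
count "$*"   # prints 1: the arguments are joined into one word
```

This is why "$@" is almost always what we want when forwarding a script's arguments to another command.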

In Unix and Unix-like operating systems, a file or directory name that starts with a dot (.) is considered to be a hidden file or directory. Hidden files and directories are not normally displayed when listing the contents of a directory using commands such as ls.

To list hidden files and directories, we can use the ls -a command, which will show all files and directories, including hidden ones.

The dot (.) at the beginning of a file name has no special meaning to the operating system, but it is often used by convention to indicate a configuration file or other file that is meant to be hidden from normal view. For example, the file .bashrc in a user's home directory is a configuration file for the Bash shell, and the file .gitignore is used to specify files that should be ignored by Git.

Keep in mind that the dot (.) is also used to represent the current directory and the parent directory in a file path. For example, ./myfile refers to the file myfile in the current directory, and ../myfile refers to the file myfile in the parent directory.

The printf command is an alternative to echo that is available in Unix and Unix-like operating systems. Like echo, printf is used to output text to the console or to a file. 

One advantage of printf over echo is that it is more flexible and can be used to print text in a specific format. It can also handle escape sequences and special characters more reliably than echo. 

Here is an example of how printf can be used: 

printf "Hello, %s\n" "world" 

This will print the string "Hello, world" followed by a newline character. The %s is a placeholder for a string, and the \n is an escape sequence that represents a newline. 

We can use other placeholders to print different types of data, such as integers (%d), floating point numbers (%f), and more. 

For example: 

printf "Number: %d\n" 42 
printf "Float: %.2f\n" 3.1415 

This will print "Number: 42" and "Float: 3.14", respectively. 

We should keep in mind that printf is not available in all shells and may not be available on all systems. However, it is a useful tool to have in your toolkit when working with shell scripts. 

To find out how long a Unix or Linux system has been running, we can use the uptime command. This command will show the current time, the length of time the system has been running, the number of users logged in, and the load average over the last 1, 5, and 15 minutes.

For example: 

$ uptime 
 21:52:06 up 3 days, 6:22, 2 users, load average: 0.00, 0.00, 0.00 

In this example, the system has been running for 3 days and 6 hours (3 days, 6:22). 

We can also use the who -b command to show the time that the system was last booted. For example: 

$ who -b 
system boot 2021-07-14 21:50 

This will show the date and time that the system was last booted. 

Keep in mind that these commands may not work on all systems, and the output may vary depending on the specific operating system and version. 

A must-know for anyone heading into a Shell Scripting interview, this is one of the most frequently asked questions.  

To find out which shells are available on the Unix or Linux system, we can use the cat command to display the contents of the /etc/shells file. This file contains a list of all the shells that are available on the system. 

For example: 

$ cat /etc/shells 
# /etc/shells: valid login shells 
/bin/sh 
/bin/bash 
/usr/bin/sh 
/usr/bin/bash 
/usr/local/bin/bash 

We can also use chsh (change shell) command to see which shells are available and to change the default shell for the user account. For example: 

$ chsh 
Changing the login shell for username 
Enter the new value, or press ENTER for the default 
        Login Shell [/bin/bash]: 

This will display a list of available shells, and we can choose one from the list or enter the path to a different shell.

We should know that the list of available shells may vary depending on the specific operating system and version, and may include other shells such as zsh, csh, and more.

There are several commands available in Unix and Unix-like operating systems to check the disk usage of a file system. Some of the most common ones are:

df: The df command displays information about the available and used disk space on a file system. By default, it shows the size, used space, and available space for all file systems. 

For example: 

$ df 
Filesystem 1K-blocks Used Available Use% Mounted on 
/dev/sda1 469059992 415097728 53962256 89% / 

We can use the -h option to display the sizes in "human-readable" format, with units such as MB and GB. 

For example: 

$ df -h 
Filesystem Size Used Avail Use% Mounted on 
/dev/sda1 450G 391G 51G 89% / 

du: The du command displays the disk usage of individual directories and files. By default, it shows the sizes of directories and their contents in blocks. 

For example: 

$ du 
16 ./dir1 
8 ./dir1/file1 
4 ./dir1/file2 
4 ./dir2 
4 ./dir2/file1 
8 . 

We can use the -h option to display the sizes in "human-readable" format, with units such as MB and GB. 

For example: 

$ du -h 
16K ./dir1 
8.0K ./dir1/file1 
4.0K ./dir1/file2 
4.0K ./dir2 
4.0K ./dir2/file1 
8.0K . 

We can also use the -c option to show the total size of all directories and files. 

For example: 

$ du -ch 
16K ./dir1 
8.0K ./dir1/file1 
4.0K ./dir1/file2 
4.0K ./dir2 
4.0K ./dir2/file1 
8.0K . 
32K total 

ncdu is a text-based disk usage viewer that allows you to navigate through directories and see the sizes of individual files and directories in real-time. It is a useful tool for finding and deleting large files and directories to free up space on your file system. 

To use ncdu, we need to run the ncdu command and navigate through the directories using the arrow keys and the enter key. Pressing d will delete a file or directory and pressing q will exit the program. 

We should know that these are just a few examples of the commands available to check disk usage. There are many other tools and options available, depending on specific needs and preferences.

It's no surprise that this one pops up often in Shell Scripting interview questions for DevOps.

Awk is a command-line utility that allows you to perform operations on a stream of text data, such as extracting and transforming the contents of a file or generating reports from data. It is particularly useful for working with structured data, such as tab-separated values (TSV) or comma-separated values (CSV).

Below is a brief overview of how awk works: 

Awk reads input one line at a time and splits each line into fields based on a predefined separator (by default, this is a whitespace character). 

The fields can then be accessed and manipulated using special variables (e.g., $1 refers to the first field, $2 to the second field, etc.). 

awk processes each line according to a set of rules, which specify the actions to be taken based on the contents of the fields. 

The modified lines are then printed to the output. 

Here is an example of using awk to print the second and fifth fields of a file containing tab-separated values: 

awk -F'\t' '{print $2, $5}' input.txt > output.txt 

This command reads the contents of input.txt, sets the field separator to a tab character (-F'\t'), and then prints the second and fifth fields of each line to output.txt. 

Awk is a very powerful and versatile utility, and there are many additional commands and options available for performing more complex operations.  

Practice more Shell Scripting interview questions and answers like this to make a lasting impression on your recruiters.

UNIX provides several security provisions for protecting files and data: 

  • File Permissions: File permissions control who is allowed to read, write, or execute a file. File permissions can be set for the owner of the file, the group that the file belongs to, and other users (also known as "others" or "world"). 
  • Access Control Lists (ACLs): ACLs are an extended version of file permissions that allow us to specify more fine-grained access controls for a file. With ACLs, we can specify permissions for specific users or groups, rather than just the owner, group, and others. 
  • Encryption: Encrypting a file or data ensures that it can only be read by someone with the appropriate decryption key. This can be useful for protecting sensitive information, such as passwords or financial data. 

It is important to note that these security provisions are not mutually exclusive, and we can use a combination of these measures to protect our files and data. 
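As a sketch of the first two provisions above, file permissions are set with chmod and ACL entries with setfacl. The file name and user name here are hypothetical, and setfacl is available on most Linux systems but not on every Unix:

```shell
#!/bin/bash
# give the owner read/write, the group read, and others no access
chmod 640 report.txt
# grant one additional (hypothetical) user read access via an ACL entry
setfacl -m u:alice:r report.txt
```

The resulting ACL can be inspected with getfacl report.txt.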

sed is a command-line utility that allows you to perform text transformations on an input stream (a file or input from a pipeline). It is commonly used for extracting part of a file, transforming the contents of a file, or deleting lines from a file. 

Here is a brief overview of how sed works: 

sed reads input one line at a time and performs the specified transformation on each line. 

The transformations are specified using a set of commands, which are provided as arguments to sed. 

The modified lines are then printed to the output. 

Here is an example of using sed to replace all occurrences of the word "apple" with the word "banana" in a file: 

sed 's/apple/banana/g' input.txt > output.txt 

This command reads the contents of input.txt, performs a substitution on each line to replace "apple" with "banana", and writes the modified lines to output.txt. The g at the end of the substitution command specifies that the substitution should be performed globally on each line (i.e., all occurrences of "apple" should be replaced, not just the first one). 

sed is a very powerful and versatile utility, and there are many additional commands and options available for performing more complex transformations. 
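For instance, the line deletion mentioned at the start of this answer uses the d command. The following sketch removes every comment line (lines beginning with #) from a file; the file names are placeholders:

```shell
# delete every line that starts with '#' from input.txt
sed '/^#/d' input.txt > output.txt
```

Here /^#/ is an address that selects the matching lines, and d is the command applied to them.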

Bash is a weakly typed language because it does not require variables to be explicitly declared with a specific data type. Instead, variables in bash are automatically interpreted based on the context in which they are used. This means that we can use a variable in one part of the script as a string, and then use it in another part of the script as an integer, for example.

This can be both a strength and a weakness of bash. On one hand, it makes it easy to use variables and write scripts quickly, as we don't have to worry about declaring the data types of the variables. On the other hand, it can also make it easy to introduce bugs into our scripts, as we might not realize that a variable is being interpreted differently than we intended.

Overall, the weakly typed nature of bash is something that we should be aware of as we write scripts, and we should ensure that the variables are being used in the way that we intended.
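A minimal sketch of this behavior, where the same variable is treated as a string in one context and as an integer in another:

```shell
#!/bin/bash
n="5"              # assigned as a string
echo "value: $n"   # used as a string
echo $((n + 3))    # used as an integer in arithmetic expansion: prints 8
```

If n held a non-numeric value such as "abc", the arithmetic expansion would treat it as 0 (or raise an error in some cases), which is exactly the kind of silent reinterpretation to watch out for.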

In Unix-like systems, the pipe operator (|) is used to redirect the output of one command to the input of another command. It allows us to chain multiple commands together and process the output of one command as the input of the next command. 

For example, consider the following command: 

ls -l | grep "foo" 

This command will list the files in the current directory using the ls command, and then filter the output to display only the lines that contain the string "foo" using the grep command. The output of the ls command is piped to the grep command using the | operator, allowing the grep command to process the output of the ls command as its input. 

We can use multiple pipe operators to chain together multiple commands. For example: 

cat file.txt | grep "foo" | sort | uniq -c 

This command will display the contents of the file "file.txt" using the cat command, filter the output to display only the lines that contain string "foo" using the grep command, sort the output alphabetically using the sort command, and count the number of occurrences of each unique line using the uniq command with the -c option. 

Overall, the pipe operator allows us to run several commands in one line by chaining them together and processing the output of one command as the input of the next. This can be a powerful and efficient way to manipulate data and perform complex tasks on the command line. 

Advanced

One of the most frequently posed Shell Scripting interview questions, be ready for it.  

In a shell script, we can use the kill command to send a signal to a process. The kill command takes two arguments: the process ID of the process that we want to signal and the signal that we want to send. 

For example, the following script sends the SIGINT signal (which is equivalent to pressing Ctrl+C) to the process with the ID 12345: 

#!/bin/bash 
kill -SIGINT 12345 

We can also use the killall command to send a signal to all processes with a particular name. For example, the following script sends SIGKILL signal (which forcibly terminates the process) to all processes with the name "foo": 

#!/bin/bash 
killall -SIGKILL foo 

We can use the ps command to list the processes running on the system, and the grep command to filter the output to only show processes with a particular name. For example, the following script lists all processes with the name "foo": 

#!/bin/bash 
ps aux | grep foo 

We can also use the wait command to wait for a process to complete before continuing with the rest of the script. The wait command takes the process ID of the process that we want to wait for as an argument. For example: 

#!/bin/bash 
foo & 
pid=$! 
wait $pid 
echo "The foo process has completed" 

The & symbol at the end of the foo command runs the process in the background. The pid variable stores the process ID of the foo process, captured from $!. The wait command then waits for the foo process to complete before continuing with the rest of the script. 

In a shell script, we can use processes and signals to manage and control the execution of our script. By using commands like kill, killall, and wait, we can manage the processes running on our system and ensure that the script executes correctly. 

Here are some tips for optimizing the performance of a shell script: 

  • Avoiding Unnecessary Use of External Commands: Calling external commands (e.g., using ls, grep, awk, etc.) can be time-consuming, so we can use built-in shell commands or shell script functions whenever possible. 
  • Use Arrays and Associative Arrays: Storing data in arrays or associative arrays can be more efficient than using external commands to process data. 
  • Use Parameter Expansion: Using parameter expansion (e.g., ${var:-default}) can be more efficient than using external commands or conditional statements to set default values for variables. 
  • Use for Loops Instead of While Loops: for loops are generally more efficient than while loops, especially when the loop iterates a fixed number of times. 
  • Use Break and Continue to Control Loops: Using break and continue to control loops can be more efficient than using conditional statements. 
  • Avoid Using cat to read files: Reading a file using cat and piping the output to another command can be slow. Instead, use while read or for line in to read the file one line at a time. 
  • Use exit to Exit a Script: Use exit to terminate the script itself and return only to return from a function; return is not valid at the top level of a script. 
  • Quote Variables Consistently: Quoting has a negligible performance cost and prevents word-splitting and globbing bugs, so quote variable expansions rather than avoiding quotes. 
  • Use Time to Measure Script Performance: Use the time command to measure the performance of different sections of your script and identify areas that may need optimization. 
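As a sketch of the cat point above, a file can be read line by line with the shell's built-in read, without piping through an external command (input.txt is a placeholder name):

```shell
#!/bin/bash
# read input.txt one line at a time without calling cat
while IFS= read -r line; do
    echo "line: $line"
done < input.txt
```

IFS= prevents leading and trailing whitespace from being stripped, and -r stops read from interpreting backslashes.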

A staple in Shell Scripting interview questions, be prepared to answer this one.  

Shell scripts are often used to automate tasks on a Unix or Linux system. They can be used to configure systems and applications, perform maintenance tasks, and more. 

  • Installing and Configuring Software: Shell scripts can be used to automate the installation and configuration of software on a system. For example, a script could download the necessary files, run any installation commands, and set any necessary configuration options. 
  • Setting up System Environments: Shell scripts can be used to set up and configure system environments, such as creating users and groups, setting up file permissions, and more. 
  • Automating System Maintenance Tasks: Shell scripts can be used to automate tasks such as backups, log rotations, and more. 
  • Running System Commands: Shell scripts can be used to run multiple commands or scripts in sequence, making it easier to automate tasks that involve multiple steps. 
  • Monitoring Systems and Applications: Shell scripts can be used to monitor systems and applications, alerting administrators to problems or triggering actions in response to certain events. 

Shell scripts can be used to automate tasks related to working with containers and container orchestration tools like Docker and Kubernetes. Below are a few examples of how shell scripts can be used in this context: 

  • Building and Pushing Docker Images: Shell scripts can be used to build Docker images and push them to a registry. This can be useful for automating the build and deployment process for applications that are run in containers. 
  • Starting and Stopping Docker Containers: Shell scripts can be used to start and stop Docker containers, either individually or in groups. This can be useful for automating the management of containerized applications. 
  • Deploying Applications to Kubernetes: Shell scripts can be used to automate the process of deploying applications to a Kubernetes cluster. This can involve building and pushing Docker images, creating, and applying Kubernetes manifests, and more. 
  • Managing Kubernetes Resources: Shell scripts can be used to automate the management of Kubernetes resources, such as pods, services, and deployments. This can involve tasks such as scaling up or down, rolling out updates, and more. 
  • Monitoring and Logging: Shell scripts can be used to monitor the status of containers and containerized applications, as well as gather and process log data. This can be useful for debugging issues and keeping track of application performance. 

To manipulate and manage processes with shell scripts, you can use the command-line utilities that are available on your system. Some common utilities for managing processes include: 

  • ps: Displays information about running processes 
  • top: Displays real-time information about running processes 
  • kill: Sends a signal to a process to terminate it 
  • pkill: Terminates processes based on their name 
  • pstree: Displays a tree of processes 

Here is an example of how you might use these utilities in a shell script to list, kill, and then confirm the termination of a process: 

#!/bin/bash 
# List the process (the [p] bracket trick stops grep from matching itself) 
ps aux | grep '[p]rocess-name' 
# Kill the process 
kill $(ps aux | grep '[p]rocess-name' | awk '{print $2}') 
# Confirm that the process has been terminated 
if [ $(ps aux | grep '[p]rocess-name' | wc -l) -eq 0 ] 
then 
echo "Process terminated successfully" 
else 
echo "Error: process not terminated" 
fi 

We can also use other utilities, such as systemctl or service, to start, stop, and manage system services. For example: 

# Start a service 
systemctl start service-name 
# Stop a service 
systemctl stop service-name 
# Restart a service 
systemctl restart service-name 

Shell scripts can be used to integrate with other tools or systems in a variety of ways. Some common ways to use shell scripts for integration include: 

  • Running Other Command-line Tools or Utilities: Shell scripts can be used to execute other command-line tools or utilities as part of an automated workflow. For example, we might use a shell script to automate the process of running a database backup, by calling the appropriate command-line utility to perform the backup. 
  • Wrapping API Calls: Shell scripts can be used to make API calls to other systems or services, as long as the API is accessible over the network. This can be useful for integrating with cloud services or other web-based systems. 
  • Building and Deploying Software: Shell scripts can be used to automate the process of building and deploying software, by executing the necessary commands to check out the code from version control, build the software, and then deploy it to the appropriate servers. 
  • Running System Maintenance Tasks: Shell scripts can be used to automate routine system maintenance tasks, such as cleaning up log files or checking for system updates. 

Overall, shell scripts are a powerful tool for automating and integrating a wide variety of tasks and processes within a larger ecosystem. 

There are many ways to use shell scripts to automate tasks such as deployment or testing. Here are a few examples: 

Deployment: Shell scripts can be used to automate the process of deploying software to various environments. For example, we might use a shell script to perform the following tasks as part of a deployment process: 

  1. Check out the latest code from version control.
  2. Build the software.
  3. Run tests to ensure that the software is working correctly.
  4. Package the software into a deployable package (e.g., a jar file or a docker container)
  5. Deploy the package to the appropriate servers.

Testing: Shell scripts can be used to automate the process of running tests against software. For example, you might use a shell script to perform the following tasks as part of a testing process: 

  1. Check out the latest code from version control
  2. Build the software
  3. Run unit tests to ensure that individual components are working correctly
  4. Run integration tests to ensure that the software works correctly when all of the components are combined
  5. Run acceptance tests to ensure that the software meets the required specifications

Overall, shell scripts are a powerful tool for automating tasks such as deployment and testing, as they allow you to define a set of steps that can be executed automatically, saving time and reducing the risk of errors.

This question is a regular feature in Shell Scripting interview questions for DevOps, be ready to tackle it.  

There are several ways to use shell scripts to interact with APIs and other external services. Below are a few examples: 

curl: The curl command can be used to send HTTP requests to an API and receive the response. For example: 

curl https://api.example.com/endpoint 

jq: The jq command is a tool for parsing and manipulating JSON data. It can be used in combination with curl to extract specific values from the API response. For example: 

curl https://api.example.com/endpoint | jq .key 

wget: The wget command can be used to download files from the web, including files returned by an API. For example: 

wget https://api.example.com/endpoint/file.zip 

grep: The grep command can be used to search for specific patterns in text. It can be used in combination with other commands, such as curl, to extract specific information from the output. For example: 

curl https://api.example.com/endpoint | grep pattern 

There are many other tools and techniques that can be used to interact with APIs and other external services from a shell script. 

Shell scripts can be used to manage and manipulate files and directories in a variety of ways. Some common tasks that can be performed using shell scripts include: 

  • Creating and Deleting Files and Directories: We can use shell commands such as touch, mkdir, rm, and rmdir to create and delete files and directories from within a shell script. 
  • Copying and Moving Files and Directories: Shell commands such as cp and mv can be used to copy and move files and directories from within a shell script. 
  • Renaming Files and Directories: We can use the mv command to rename files and directories from within a shell script. 
  • Searching for Files and Directories: Can use the find command to search for files and directories based on various criteria, such as their name, size, or modification date. 
  • Archiving and Compressing Files: Use the tar command to create archive files containing multiple files and directories, and the gzip and bzip2 commands to compress files. 

Overall, shell scripts provide a powerful set of tools for managing and manipulating files and directories on a system, allowing us to automate tasks such as backups, file cleanup, and more. 
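As a small illustration, the following sketch combines touch, tar, and rm on a scratch directory (created with mktemp so the example is self-contained):

```shell
# Archive a set of files and verify the archive's contents.
src=$(mktemp -d)                 # scratch directory standing in for real data
touch "$src/a.log" "$src/b.log"

# tar flags: -c create, -z gzip-compress, -f archive name, -C change directory
tar -czf "$src/backup.tar.gz" -C "$src" a.log b.log

# -t lists the archive's contents without extracting it
contents=$(tar -tzf "$src/backup.tar.gz")
echo "$contents"
rm -rf "$src"                    # clean up the scratch directory
```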

To check if a file exists on the filesystem using a bash shell script, we can use the test command with the -f option. Below is an example of how we might use this command to check if a file called "file.txt" exists:

if test -f "file.txt"; then 
    echo "File exists" 
else 
    echo "File does not exist" 
fi 

Alternatively, we can use the [ -f "file.txt" ] syntax to achieve the same result. 

Below is an example of how we might use this command in a script: 

#!/bin/bash 
if [ -f "file.txt" ]; then 
    echo "File exists" 
else 
    echo "File does not exist" 
fi 

This script will check for the existence of a file called "file.txt" and will print "File exists" if the file exists or "File does not exist" if the file does not exist. 

We can also use the -d option to check for the existence of a directory or the -e option to check for the existence of either a file or a directory. This is also one of the most frequently asked Shell Scripting questions.  
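The three operators can be combined to classify a path; /tmp is used here because it exists on virtually every Unix-like system:

```shell
# -f matches a regular file, -d a directory, -e anything that exists
target="/tmp"
if [ -d "$target" ]; then
  kind="directory"
elif [ -f "$target" ]; then
  kind="regular file"
elif [ -e "$target" ]; then
  kind="other filesystem object"
else
  kind="missing"
fi
echo "$target is a $kind"
```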

The difference between [[ $string == "efg*" ]] and [[ $string == efg* ]] is the presence of double quotes around the string being compared. 

In the first example, [[ $string == "efg*" ]], the double quotes around "efg*" indicate that the string should be treated literally, and the * character should be interpreted as a literal asterisk. This means that the expression will only evaluate to true if the value of $string is exactly "efg*". 

In the second example, [[ $string == efg* ]], the double quotes are not present, which means that the * character will be interpreted as a wildcard. This means that the expression will evaluate to true if the value of $string starts with "efg" followed by any number of characters. 

For example: 

string="efgabc" 
if [[ $string == "efg*" ]]; then 
    echo "Match with double quotes" 
else 
    echo "No match with double quotes" 
fi 
if [[ $string == efg* ]]; then 
    echo "Match without double quotes" 
else 
    echo "No match without double quotes" 
fi 

The output of this script would be: 

No match with double quotes 
Match without double quotes 

Overall, the use of double quotes can be important in bash to ensure that strings are treated literally, and to prevent special characters from being interpreted as wildcards or other special symbols. 
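The same quoting rule applies in case statements, where patterns are also globs unless quoted:

```shell
# An unquoted case pattern is a glob; a quoted one is matched literally.
string="efgabc"
case $string in
  "efg*") result="literal match" ;;   # only the exact 4-character string efg*
  efg*)   result="glob match"    ;;   # anything beginning with efg
  *)      result="no match"      ;;
esac
echo "$result"
```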

The crontab command is used to schedule tasks to be executed automatically at a specified time. When we use the crontab command to schedule a task, the task is stored in one of two files: 

  • The system-wide crontab File: This file is usually located at /etc/crontab, and it contains tasks that are executed by the system on behalf of all users. This file is usually reserved for tasks that need to be run with superuser privileges. 
  • The user crontab File: Each user on a system has their own crontab file, typically stored under the /var/spool/cron directory (the exact path varies by distribution). This file contains tasks that are specific to the individual user and that are executed with the permissions of that user. 

To view the tasks in user crontab file, we can use the crontab -l command. To edit the user crontab file, we can use the crontab -e command. To remove the user crontab file, we can use the crontab -r command. 

To schedule a task using the crontab command, we need to specify the time and date when the task should be executed, as well as the command that should be executed. The time and date are specified using a special syntax called the "crontab format," which consists of five fields: minute, hour, day of month, month, and day of week. Each field can contain a single value, a list of values, or a range of values, separated by commas. 

For example, to schedule a task to be executed every hour, you might use a crontab entry like this: 

0 * * * * /path/to/command 

This entry specifies that the task should be run every hour, at the top of the hour (when the minute field is 0). 
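Crontab entries can also be installed from a script. The sketch below appends a new entry without clobbering any existing ones; the command path is a placeholder, and the final install step is left commented out so the sketch makes no changes:

```shell
# Append an entry to the current user's crontab non-interactively.
entry='0 * * * * /path/to/command'          # placeholder job

current=$(crontab -l 2>/dev/null || true)   # empty if no crontab exists yet
new_tab=$(printf '%s\n%s\n' "$current" "$entry" | grep -v '^$')
echo "$new_tab"

# To install it for real:
# printf '%s\n' "$new_tab" | crontab -
```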

To count the number of words in a given file using a bash shell script, we can use the wc command with the -w option. 

Here is an example of a command sequence that we can use to count the words in a file called "file.txt": 

# Count the number of words in the file 
word_count=$(wc -w < "file.txt") 
# Print the result 
echo "Number of words: $word_count" 

This script will execute the wc command with the -w option, which counts the number of words in the file. Redirecting the file into wc (rather than passing the filename as an argument) makes wc print only the number; otherwise the filename would be captured along with the count. The output of the wc command will be captured by the $(...) syntax and stored in the word_count variable. The script will then print the value of the word_count variable using the echo command. 

The wc command can also be used to count the number of lines in a file (using the -l option) or the number of bytes in a file (using the -c option). 

For example, to count the number of lines in a file called "file.txt", we could use the following script: 

# Count the number of lines in the file 
line_count=$(wc -l < "file.txt") 
# Print the result 
echo "Number of lines: $line_count" 

And to count the number of bytes in a file called "file.txt", you could use the following script: 

# Count the number of bytes in the file 
byte_count=$(wc -c < "file.txt") 
# Print the result 
echo "Number of bytes: $byte_count"
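One detail worth noting: when wc is given a filename argument, it prints the filename after the count, so that string would end up in the variable too. Redirecting the file into wc avoids this. A self-contained sketch using a scratch file:

```shell
# Redirecting into wc makes it print only the number, with no filename.
file=$(mktemp)
printf 'one two\nthree\n' > "$file"

lines=$(wc -l < "$file")
words=$(wc -w < "$file")
bytes=$(wc -c < "$file")

echo "lines=$lines words=$words bytes=$bytes"
rm -f "$file"
```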

In Unix-like operating systems, the "s" permission bit is used to set the setuid (or setgid) permission on a file. When the setuid permission is set on a file, it allows the file to be executed with the permissions of the owner of the file, rather than the permissions of the user who is executing the file. This can be useful for allowing users to execute a program with superuser privileges, even if they do not have the necessary permissions to run the program directly.

For example, consider the sudo program, which is owned by the root user and has the setuid permission set. When a user executes it, it runs with the permissions of the root user, even if the user does not have root privileges themselves. This is useful for allowing users to execute commands that require superuser privileges, such as installing software or modifying system settings.

The setgid permission works in a similar way, but it allows the file to be executed with the permissions of the group owner of the file, rather than the permissions of the user who is executing the file.

The setuid and setgid permissions are represented by an "s" in the file's permissions string, replacing the owner's or group's execute bit. For example, if a file has the permissions "rwsr-sr-x", the two "s" bits indicate that both the setuid and setgid permissions are set on the file.

It is important to use the setuid and setgid permissions carefully, as they can be a security risk if used improperly. In particular, it is important to make sure that setuid and setgid programs are carefully designed and implemented, as vulnerabilities in these programs can be exploited to gain unauthorized access to the system.
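A small sketch showing how the bit appears in practice, using a scratch file and GNU stat (the -c option is GNU-specific):

```shell
# Set the setuid bit on a scratch file and read back its mode string.
tmp=$(mktemp)
chmod 4755 "$tmp"             # 4 = setuid, 755 = rwxr-xr-x
mode=$(stat -c '%A' "$tmp")
echo "$mode"                  # the owner's x slot is shown as s
rm -f "$tmp"

# To audit a system for setuid executables:
# find / -perm -4000 -type f 2>/dev/null
```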

Expect to come across this popular question in UNIX Scripting interviews.  

To create a directory with the desired permissions, I can use the mkdir command and specify the -m option to set the permissions of the directory. 

To allow anyone in the group to create and access files in the directory, I need the following permissions: 

  • rwx for the owner (read, write, and execute permissions) 
  • rwx for the group (read, write, and execute permissions) 
  • r-x for others (read and execute permissions, but no write permission) 

In octal notation, each digit is the sum of read (4), write (2), and execute (1), so these permissions are written as 775. On their own, however, group write permissions would also let group members delete each other's files. To prevent that, I add the sticky bit (octal 1000), which restricts deletion in a writable directory to the owner of each file (plus the directory owner and root). Combining the two gives mode 1775: 

mkdir -m 1775 /path/to/directory 

This will create the directory with the permissions rwxrwxr-t, which allows the owner and members of the group to create and access files in the directory, while ensuring that files created by one user cannot be deleted by another. 

It is also important to note that the setgid permission can be used to ensure that new files created in the directory inherit the directory's group, rather than the primary group of the user who created them. To set the setgid permission on the directory, I will use the chmod command with the g+s option: 

chmod g+s /path/to/directory 

This will set the setgid permission on the directory, ensuring that new files created in the directory automatically belong to the directory's group. 
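These ideas can be combined, along with the sticky bit (octal 1000), which prevents group members from deleting files they do not own. A self-contained sketch using a scratch directory and GNU stat:

```shell
# Mode 3775 = setgid (2000) + sticky (1000) + rwxrwxr-x (775).
base=$(mktemp -d)
mkdir -m 3775 "$base/shared"
mode=$(stat -c '%A' "$base/shared")
echo "$mode"      # group x shows as s (setgid), others' x as t (sticky)
rm -rf "$base"
```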

To monitor a log file that is constantly updating, we can use the tail command with the -f option. The tail command is used to display the last few lines of a file, and the -f option allows it to follow the file and display new lines as they are added to the file. 

For example, to monitor the log file "log.txt" and display new lines as they are added to the file, we can use the following command: 

tail -f log.txt 

This will display the last few lines of the log file, and then continue to display new lines as they are added to the file. The tail command will keep running until we stop it, so we can use it to effectively monitor the log file in real-time. 

We can also use the -n option to specify the number of lines to display initially, or (together with -f) the -s option to specify the sleep interval between checks of the file. 

For example, to display the last 100 lines of the log file and then poll for new lines every 5 seconds, we can use the following command: 

tail -f -n 100 -s 5 log.txt 

Overall, the tail command is a useful tool for monitoring log files and other files that are constantly updating. It allows us to view new lines as they are added to the file, making it easier to track changes and identify issues. 
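In practice the follow pipeline is often combined with grep to surface only the interesting lines. The sketch below demonstrates the filtering on a finite sample file; for a live log, cat would be replaced with tail -f, and GNU grep's --line-buffered keeps matches flowing instead of being batched:

```shell
# Filter a log for error lines; with `tail -f` in place of `cat`,
# the same pipeline follows a live log.
log=$(mktemp)
printf 'INFO start\nERROR disk full\nINFO done\n' > "$log"

errors=$(cat "$log" | grep --line-buffered 'ERROR')
echo "$errors"
rm -f "$log"
```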

To open a connection to a remote server where we can execute commands, we can use one of the following methods: 

  • SSH (Secure Shell): A network protocol that allows us to securely connect to a remote server and issue commands. To use SSH, we need an SSH client on the local machine and connect to the server using the ssh command. For example, ssh user@server will open an SSH session to the server, allowing us to issue commands and interact with the server as if we were logged in directly. 
  • Telnet: An older network protocol that also allows us to connect to a remote server and issue commands, but without encryption. To use Telnet, we need a Telnet client on the local machine and connect to the server using the telnet command. For example, telnet server will open a Telnet session to the server. 
  • Remote Desktop: If the remote server is running a graphical desktop environment, we can use a remote desktop protocol such as VNC (Virtual Network Computing) or RDP (Remote Desktop Protocol) to connect to the server and interact with it as if we were sitting in front of the machine. This requires a remote desktop client on the local machine and a connection using the appropriate protocol. 

Overall, the choice of method depends on your specific needs and the capabilities of the server. SSH is the standard for command-line access (Telnet transmits everything, including passwords, unencrypted, so it should be avoided on untrusted networks), while remote desktop protocols are more suitable for interacting with a graphical desktop environment. 
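SSH in particular lends itself to scripting, since a command can be passed directly to ssh. The host and command below are placeholders, so the invocation is only printed rather than executed; BatchMode makes ssh fail fast instead of prompting for a password:

```shell
# Build (and here, merely print) a one-shot remote command over SSH.
remote="user@server.example.com"   # placeholder host
cmd="df -h /"                      # placeholder command
invocation="ssh -o BatchMode=yes $remote $cmd"
echo "Would run: $invocation"

# To execute for real:
# ssh -o BatchMode=yes "$remote" "$cmd"
```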

A must-know for anyone heading into a Shell Scripting interview, this is one of the most frequently asked UNIX Scripting interview questions.

To find the number of lines in a file that contains the word "LINUX" using a bash shell script, we can use the grep command with the -c option. 

Below is an example of a script that you can use to find the number of lines in a file called "file.txt" that contain the word "LINUX": 

# Find the count of lines containing "LINUX" in the file 
line_count=$(grep -c "LINUX" "file.txt") 
# Print the result 

echo "Number of lines containing LINUX: $line_count" 

This script will execute the grep command with the -c option, which searches for the specified pattern ("LINUX") in the file and counts the number of lines that match the pattern. The output of the grep command will be captured by the $(...) syntax and stored in the line_count variable. The script will then print the value of the line_count variable using the echo command. 

We can also use the -i option to ignore case when searching for the pattern, or the -w option to match only whole words. 

For example, to search for the pattern "linux" regardless of case, we can use the following script: 

# Find the count of lines containing "linux" in the file, ignoring case 
line_count=$(grep -ci "linux" "file.txt") 
# Print the result 
echo "Number of lines containing linux: $line_count" 

Questions like this are scenario-based Shell Scripting interview questions and are generally asked of experienced candidates. 

To print a list of every user's login names on a Unix-like system, we can use the cut and sort commands to extract the login names from the /etc/passwd file and sort them alphabetically. 

Below is an example of a command sequence that we can use to print a list of login names: 

# Extract the login names from the /etc/passwd file 
login_names=$(cut -d: -f1 /etc/passwd) 
# Sort the login names alphabetically 
sorted_login_names=$(echo "$login_names" | sort) 
# Print the login names 
echo "$sorted_login_names" 

This script will use the cut command to extract the first field (the login name) from each line of the /etc/passwd file, using the : character as the delimiter. The output of the cut command will be stored in the login_names variable. 

The script will then use the sort command to sort the login names alphabetically and store the sorted list in the sorted_login_names variable. Finally, the script will use the echo command to print the login names. 

The /etc/passwd file is a system file that contains information about the users on the system, including their login names, home directories, and other details. Each line of the file represents a single user, and the fields are separated by ":" characters.  
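The same listing can be produced as a single pipeline with awk, which splits each line on the ":" delimiter and prints the first field:

```shell
# Print all login names from /etc/passwd, sorted alphabetically.
names=$(awk -F: '{ print $1 }' /etc/passwd | sort)
echo "$names"
```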

There are several ways to send mail using a shell script. One option is to use the mail command, which is a command-line utility for sending and receiving mail.

To use the mail command to send a message, you can use the following syntax: 

echo "message" | mail -s "subject" recipient@example.com 

This will send a message with the specified subject to the specified recipient. The message can be entered directly on the command line, or it can be piped to the mail command from another command. 

For example, to send a message with the subject "Hello" to the recipient "user@example.com", you can use the following command: 

echo "Hello, how are you?" | mail -s "Hello" user@example.com 

We can also use the -a option to attach a file to the message or the -c option to specify a carbon copy recipient. 

For example, to send a message with the subject "Hello" to the recipient "user@example.com", with a carbon copy to "cc@example.com" and an attachment "attachment.txt", we can use the following command: 

echo "Hello, here is the attachment you requested." | mail -s "Hello" -a "attachment.txt" -c cc@example.com user@example.com 

A common question in Shell Scripting interviews, don't miss this one.  

Description

Top Shell Scripting Tips and Tricks for Programmers

  • Use Comments in Scripts At All Times 

This is a recommended practice that applies to all programming languages, not only shell scripting. Adding comments to a script makes it easier for you, or anyone else reading it, to understand what each section does. Comments begin with the # symbol.

  • Clean Code 

Declare all global variables first, followed by all functions. Use local variables inside functions, and put the main body of the script after the functions. Use explicit exit status codes in your functions, in if statements, and at the end of the script.

  • Using Some Trap for Unexpected Termination 

While your script is running, a user can stop it with Ctrl-c. If your script has modified a directory or file, you may need to restore it to its original condition before exiting. This is what the trap command is for: it registers a handler that runs when the script receives a signal.

Pressing Ctrl-c sends the script a SIGINT signal; a SIGTERM signal is sent when a process is terminated, for example with the kill command.
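A minimal sketch of the pattern: a cleanup function registered with trap, so a scratch file is removed whether the script finishes normally or is interrupted.

```shell
# The scratch file stands in for whatever state the script modifies.
workfile=$(mktemp)

cleanup() {
  rm -f "$workfile"
}
# Run cleanup on Ctrl-c (INT), termination (TERM), and normal exit (EXIT)
trap cleanup INT TERM EXIT

echo "working with $workfile"
# ... script body that modifies $workfile ...
```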

  • Exit Codes 

Any Unix command that returns control to its parent process gives an exit code, an integer between 0 and 255. 

Other integers are allowed, but they are reduced modulo 256, so exit -10 is equivalent to exit 246, and exit 257 is equivalent to exit 1. Exit codes can be used in shell scripts to modify the execution flow based on the success or failure of executed commands.
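This modulo behavior is easy to observe from a script. The && / || form below captures each status without tripping set -e if it happens to be enabled:

```shell
# Exit statuses outside 0-255 are reduced modulo 256.
( exit 257 ) && s1=0 || s1=$?   # 257 mod 256 = 1
( exit 0 )   && s2=0 || s2=$?   # success stays 0
( exit 1 )   && s3=0 || s3=$?   # conventional failure status
echo "257 -> $s1, 0 -> $s2, 1 -> $s3"
```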

  • Automate GIT with Shell Script 

Automate Git pushes and pulls with a shell script. Although Jenkins can do this directly, a script is a lightweight alternative when a full server build is not required.

  • Print Everything That you do on Terminal 

Typically, scripts alter the state of a system. Because we cannot predict when a user will send a SIGINT or when a script fault will cause an unexpected termination, it is helpful to print everything you are doing to the terminal so that the user can follow along without having to open the script.

How to Prepare for Shell Scripting Interview Questions?

Java and C++ applications are typically designed around an object paradigm, whereas shell programming is heavily focused on files, text, lines, and words. Instead of working with objects, shell programming connects applications using textual transformations.

Before you can connect programs together, you must first understand what each one does on its own. The tools themselves can be arbitrarily complex: sed and awk, for example, are fully fledged programming languages in their own right and make extensive use of regular expressions.

Shell scripting is not difficult at all once you have mastered (1) the notion of data streams (pipelines, standard input/output), (2) the concept of commands with their arguments and options, and (3), most challenging, the precise effect of the shell metacharacters.

As a user of a Linux-based operating system, you must be familiar with all the fundamental commands and applications that meet your computing needs. You can organize these instructions and programs using Shell Scripting (Programming) techniques to automate their execution. You can also add complexity by integrating all these commands using programming logic to address a challenging computing issue. The best certifications for programmers will help you acquire more knowledge about programming and upscale your skills. If you want to pursue a career as a Linux administrator or engineer, this course is perfect for you. 

Job Roles

  • Linux Engineer  
  • SAS Developer- Unix/Linux/Shell Scripting 
  • Automation with Unix Shell Scripting 
  • AWS/DevOps Engineer- Python/Shell Scripting 
  • Linux Admin (Shell or Python Scripting) 
  • DevOps Engineer- Unix/Shell Scripting 
  • Cloud Administrator- Kubernetes/Shell Scripting

Top Companies

  • IBM 
  • Hewlett Packard Enterprise 
  • Persistent Systems 
  • Paytm 
  • HCL 
  • Mastercard 
  • TCS 

Some Key Tips

  1. Learn Linux every day for a little while. 
  2. Practice and learn the fundamental modules and syntax. 
  3. Improve your logical thinking. 
  4. The fundamentals and basics must be at your fingertips. 
  5. Spend more time practicing and less time learning. 
  6. The secret to success in any subject is practice. 

Summary

There are several opportunities available from numerous reputable businesses worldwide. According to studies, Linux shell scripting accounts for around 17 percent of the market, and Linux has grown its market share significantly since 2018. The average pay for shell programming skills is $81,951, according to PayScale. Working through these shell scripting interview questions is an excellent place to start if you want to advance your DevOps or system administration career.

According to glassdoor.com, IBM Linux Engineer salaries (5 salaries reported) average $136,407 per year. The average salary for a Linux Systems Engineer is $109,307 per year in the US. 

If you are determined to ace your next interview as a Linux Engineer or DevOps Engineer, these shell-scripting interview questions and answers will fast-track your career and help you boost your confidence in interviews. To relieve you of the worry and burden of preparing for your upcoming interviews, we have compiled the above list of Shell Scripting interview questions with answers prepared by industry experts. Being well-versed in these commonly asked UNIX Shell Scripting interview questions will be your first step toward a promising career as a DevOps/Linux Engineer. 

If you wish to build a career as a Linux administrator or Linux engineer, you can learn more about KnowledgeHut Programming certifications from the best training available. Crack your Shell Scripting Interview questions with ease and confidence! 
