A shell script has loop and conditional control structures that let you choose which commands to run or repeat. It is an effective tool for system administrators, developers, and power users, since it can automate repetitive tasks to save time and effort. Whether you're a beginner or an experienced user, the shell scripting interview questions in this guide cover a range of topics, including shell commands, variables, functions, and control structures. For more experienced users, there are advanced topics such as Docker and Kubernetes, as well as best practices for writing clean, efficient code. You'll learn how to use command-line tools like awk, sed, and grep to streamline your workflow. With detailed explanations for each question, this guide is a great resource for anyone who wants to gain a deeper understanding of shell scripting. It's perfect for those who are preparing for an interview or looking to take their skills to the next level.
A program written in a shell programming language, such as bash, csh, or ksh, is referred to as a shell script. Shell scripts are used to automate operations that are frequently carried out with a shell, such as launching a web server, configuring a development environment, or delivering software.
Shell scripts are advantageous because they enable the automation of several commands that would otherwise require manual execution. This can cut down on time and error-prone work. Shell scripts are also portable: they can be executed on any machine that has a compatible shell installed.
Shell scripts are frequently used in software development, DevOps, and system administration tasks. They can be used to automate processes for maintaining infrastructure, building and deploying software, and running tests. Additionally, they can be used to automate platform- or operating-system-specific operations.
Shell scripts can be used to develop unique utilities and tools, as well as to increase the functionality of already existing tools, in addition to automating operations. They can be used to execute operations that would be challenging or time-consuming to perform manually or to automate repetitive or complex processes.
To create a shell script, I will need a text editor, such as gedit, vim, or emacs. I can also use a word processor, such as Microsoft Word or Google Docs, but I have to make sure to save the file as plain text.
To create a shell script, I will follow these steps:
Create a new file (for example, script.sh) in a text editor and add a shebang line at the top:
#!/bin/bash
Write the commands the script should run, then save the file.
Make the script executable with the chmod command:
chmod +x script.sh
Run the script:
./script.sh
I can also run the script by specifying the path to the interpreter followed by the path to the script, like this:
/bin/bash script.sh
It is a good practice to include comments in our script to explain what the different parts of the script do. In a shell script, comments are denoted by a pound sign (#). Anything following the pound sign on the same line will be treated as a comment and ignored by the shell.
For example:
# This is a comment
echo "Hello, world!" # This is also a comment
This is one of the most frequently asked UNIX Shell Scripting interview questions.
The #! (shebang) is a special line at the beginning of a shell script that tells the operating system which interpreter to use to execute the script. The shebang line consists of a pound sign (#) and an exclamation point (!) followed by the path to the interpreter executable. For example, the shebang line for a bash script might look like this:
#!/bin/bash
The shebang line is not required for all shell scripts, but it is a good practice to include it so that the script can be run directly from the command line without having to specify the interpreter. For example, if we include the shebang line in a script and make the script executable (using chmod +x script.sh), we can run the script simply by typing ./script.sh instead of bash script.sh.
Without a shebang line, the script can still be run by passing it explicitly to an interpreter (for example, bash script.sh), but when executed directly the operating system will not know which interpreter to use, and the script may fail to run or be run by the wrong shell.
In a shell script, comments are lines of text that are not executed as part of the script but are included to provide documentation and explanations for the code. Comments are used to make the code easier to understand and to help other developers (or future versions of ourselves) understand what the code is doing.
To include a comment in a shell script, we can use the # character followed by the text of the comment. For example:
# This is a comment
Anything following the `#` character on the same line will be treated as a comment, so you can use comments to annotate specific lines of code:
# Increment the value of the counter variable
counter=$((counter+1))
We can also use comments to include multi-line blocks of documentation or explanations:
: '
This is a multi-line comment.
This comment block can be used to include
detailed explanations or documentation for
the code below.
'
Note that the `:` command is a shell built-in that does nothing, so it can be used to create a block of comments without actually executing any code. The single quotes around the comment block are necessary so that the shell treats the entire block as a single literal string and does not try to execute or expand anything inside it.
Expect to come across this popular question in UNIX Scripting interviews.
To run a shell script, I will need to have a shell interpreter installed on my system. Common shell interpreters include bash, csh, and ksh.
To run a shell script, I will follow these steps:
I will make sure that the script is executable. I can make a script executable by running the chmod command and setting the executable flag:
chmod +x script.sh
Run the script by typing:
./script.sh
Alternatively, we can specify the path to the interpreter followed by the path to the script, like this:
/bin/bash script.sh
We can replace /bin/bash with the path to the shell interpreter that we want to use.
We can also run a shell script by passing it to the interpreter as a command-line argument:
bash script.sh
Replace `bash` with the name of the shell interpreter that we want to use.
The path to the shell interpreter we want to utilize should be specified in a shebang line at the beginning of our script. This allows us to run the script by simply typing `./script.sh`, regardless of the default shell interpreter on our system.
For example, if our script is written in bash, we can include the following shebang line at the top of the script:
#!/bin/bash
This tells the system to use the bash interpreter to run the script.
We can also specify command-line arguments when running a shell script. These arguments are passed to the script and can be accessed within the script using the variables `$1`, `$2`, `$3`, and so on.
For example, the following script prints the first command-line argument:
#!/bin/bash
echo "The first argument is $1"
To run the script and pass it a command-line argument, we can type:
./script.sh foo
This will print "The first argument is foo".
In a shell script, we can perform input and output redirection using the < and > symbols.
For example, to redirect the output of a command to a file, we can use the > symbol followed by the name of the file. For example:
# Redirect the output of the "ls" command to a file called "directory_listing.txt"
ls > directory_listing.txt
To append the output of a command to a file, you can use the >> symbol instead of >. For example:
# Append the output of the "ls" command to the file "directory_listing.txt"
ls >> directory_listing.txt
To redirect the input of a command from a file, you can use the < symbol followed by the name of the file. For example:
# Sort the contents of the file "unsorted_list.txt" and store the result in "sorted_list.txt"
sort < unsorted_list.txt > sorted_list.txt
You can also use multiple redirections in a single command. For example:
# Sort the contents of the file "unsorted_list.txt", store the result in "sorted_list.txt", and display it on the screen
sort < unsorted_list.txt | tee sorted_list.txt
In this example, the output of the sort command is piped to the tee command, which writes it to the file "sorted_list.txt" and also displays it on the screen. (Note that sort < unsorted_list.txt > sorted_list.txt | cat would not work as intended: once the output is redirected to the file, nothing is left to pass through the pipe.)
Note that these redirections can also be used in combination with other shell commands and constructs, such as loops and conditional statements.
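The same operators also work with standard error (file descriptor 2), and a here-document (<<) can feed a block of text into a command's input. Below is a brief sketch; the /tmp file names are purely illustrative:

```shell
# Send error messages (file descriptor 2) to a file
ls /nonexistent_dir 2> /tmp/errors.txt

# Send both stdout and stderr to the same file
ls /nonexistent_dir > /tmp/all_output.txt 2>&1

# Feed a block of text to a command's stdin with a here-document
sort << EOF
banana
apple
cherry
EOF
# prints apple, banana, cherry (one per line, sorted)
```

Redirecting stderr separately from stdout is especially useful in scripts that log errors to a file while showing normal output on the screen.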
Below is a list of some common shell commands that you might use in a shell script:
These are just a few examples, and there are many more shell commands that we can use in a shell script. Some commands are specific to certain shells (e.g., Bash, Zsh, etc.), while others are available in most shells.
This is one of the most commonly asked shell scripting commands interview questions.
There are many different shell environments that are commonly used, including:
In a shell script, command line arguments are stored in special variables. The first argument is stored in the variable $1, the second argument is stored in the variable $2, and so on. The variable $0 contains the name of the script itself.
Below is an example of a simple script that uses command line arguments:
#!/bin/bash
echo "The first argument is: $1"
echo "The second argument is: $2"
echo "The third argument is: $3"
To use command line options in a script, we can use the getopts built-in command. This command allows us to specify which options the script should accept, and then provides a way for the script to access the values of those options.
Here is an example of a script that uses the getopts command:
#!/bin/bash
while getopts ":a:b:" opt; do
  case $opt in
    a)
      echo "Option -a was specified with value $OPTARG"
      ;;
    b)
      echo "Option -b was specified with value $OPTARG"
      ;;
    \?)
      echo "Invalid option: -$OPTARG"
      exit 1
      ;;
    :)
      echo "Option -$OPTARG requires an argument."
      exit 1
      ;;
  esac
done
In this example, the script will accept two options, -a and -b, which can be followed by a value. The getopts command will parse the command line arguments and set the variables $opt and $OPTARG accordingly. The script can then use these variables to determine which options were specified and what their values were.
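As a self-contained sketch of how getopts parsing works (the parse_args function name and its option letters are invented for illustration):

```shell
#!/bin/bash
# Hypothetical helper that parses -a and -b options with getopts.
# Making OPTIND local lets the function be called more than once.
parse_args() {
    local OPTIND=1 opt
    while getopts ":a:b:" opt; do
        case $opt in
            a) echo "a=$OPTARG" ;;
            b) echo "b=$OPTARG" ;;
            \?) echo "invalid option: -$OPTARG" ;;
        esac
    done
}

parse_args -a foo -b bar
# prints:
# a=foo
# b=bar
```

A real script would typically be invoked as ./script.sh -a foo -b bar, with getopts reading the script's own command-line arguments.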
In a shell script, we can use the for loop to iterate over a sequence of values. The syntax for a for loop is:
for variable in list
do
  command1
  command2
  ...
done
Here is an example of a for loop that iterates over a range of numbers:
#!/bin/bash
for i in {1..10}
do
  echo "$i"
done
This script will output the numbers 1 through 10.
You can also use the while loop to execute a block of commands repeatedly while a particular condition is true. The syntax for a while loop is:
while condition
do
  command1
  command2
  ...
done
Here is an example of a while loop that counts down from 10 to 1:
#!/bin/bash
i=10
while [ $i -gt 0 ]
do
  echo "$i"
  i=$((i-1))
done
This script will output the numbers 10 through 1.
We can also use the break and continue statements to control the flow of a loop. The break statement will cause the loop to terminate early, while the continue statement will skip the rest of the current iteration and move on to the next one.
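A short sketch combining both statements (the specific numbers are arbitrary):

```shell
#!/bin/bash
# continue skips even numbers; break ends the loop entirely at 7
for i in {1..10}
do
  if [ $((i % 2)) -eq 0 ]; then
    continue    # skip the rest of this iteration
  fi
  if [ "$i" -eq 7 ]; then
    break       # leave the loop early
  fi
  echo "$i"
done
# prints 1, 3, 5
```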
There are several techniques you can use to debug a shell script:
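For example, one common technique is to enable tracing with set -x, which prints each command (prefixed with +) to stderr before running it, making it easy to see where a script goes wrong:

```shell
#!/bin/bash
# Trace a small section of a script; the variable here is illustrative
set -x
name="world"
echo "Hello, $name"
set +x    # turn tracing back off
```

We can also trace an entire script without editing it by running bash -x script.sh, or check it for syntax errors without executing anything using bash -n script.sh.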
There are a few situations where shell programming/scripting may not be the best choice:
That being said, shell scripts are still a very useful tool for automating a wide variety of tasks and can be a good choice in many situations. It's important to consider the specific requirements of your task and choose the right tool for the job.
The default permissions of a file when it is created depends on the umask (user mask) of the user who is creating the file. The umask is a value that determines the default permissions for newly created files and directories. It is specified in octal notation, and the permissions it specifies are subtracted from the default permissions set by the system.
For example, if the umask is set to 022 (octal), the default permissions for a newly created file will be 644 (rw-r--r--). This means that the owner of the file will have read and write permissions, and others will have read-only permissions.
If the umask is set to 002 (octal), the default permissions for a newly created file will be 664 (rw-rw-r--). This means that the owner and members of the owner's group will have read and write permissions, and others will have read-only permissions.
We can view and change the umask of our user account using the umask command. For example, to set the umask to 022 (octal), we can use the command umask 022.
Keep in mind that the default permissions of a file may also be influenced by the permissions of the directory in which the file is being created, and by any default ACLs (access control lists) that are set on the system.
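As a quick sanity check of the umask arithmetic described above (the file name is just for illustration; stat -c is the GNU/Linux form of the command):

```shell
#!/bin/bash
# With umask 022, new files get 666 - 022 = 644 (rw-r--r--)
umask 022
rm -f /tmp/umask_demo
touch /tmp/umask_demo
# GNU stat prints the octal permission bits
stat -c '%a' /tmp/umask_demo    # prints 644
rm -f /tmp/umask_demo
```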
There are four key elements of a Linux file system:
The kernel is the central part of a computer operating system that controls all the other parts of the system. It is responsible for managing the hardware and software resources of the system, including the CPU, memory, and input/output devices.
The kernel is the lowest level of the operating system and is responsible for interfacing with the hardware and providing a platform for other software to run on top of. It is a fundamental part of the operating system and is essential for the proper functioning of the computer.
In Linux, we can make a variable unchangeable by using the readonly builtin command in the Bash shell. This command can be used to mark a shell variable as read-only, which means that it cannot be modified or unset.
Below is an example of how to use the readonly command:
$ x=10
$ readonly x
$ x=20
bash: x: readonly variable
In the example above, the variable x is first set to the value 10. The readonly command is then used to mark x as a read-only variable. Finally, an attempt is made to change the value of x to 20, but this results in an error because x is a read-only variable.
It is important to note that the readonly command only works within the Bash shell and does not affect the variables in other processes or programs. It is also specific to the shell in which it is used, so a variable marked as read-only in one shell may not be read-only in another shell.
Shell scripting has several drawbacks that you should be aware of:
Overall, shell scripting is a useful tool for automating tasks and interacting with the command line, but it may not be the best choice for larger or more complex projects. In these cases, you may want to consider using a more powerful programming language such as Python or C++.
It's no surprise that this question pops up often in Shell Scripting interviews.
There are several ways to create a shortcut in Linux, depending on what we want to do and the desktop environment we are using. Here are a few options:
It is important to note that the steps to create a shortcut may vary depending on our specific Linux distribution and desktop environment.
In Linux, shell programs (also known as shell scripts) are typically stored in files with a .sh file extension. These files can be stored in any directory on the system, but there are a few common locations that are used:
It is important to note that these directories are just conventions, and we can store the shell scripts in any directory on the system. However, using one of these directories can make it easier for users to access and run the scripts.
A common Shell Scripting interview question for DevOps, don't miss this one.
In Linux, a hard link is a type of link that points directly to the inode of a file, while a soft link (also known as a symbolic link or symlink) is a type of link that points to the file name of a file. Here are some key differences between hard links and soft links:
Hard Links
Soft Links
In a shell script, we can define a function by using the following syntax:
function function_name {
  commands
}
For example, the following script defines a function called greet that prints a greeting:
#!/bin/bash
function greet {
  echo "Hello, world!"
}
greet
We can also define a function using just the function name followed by parentheses, like this:
#!/bin/bash
greet() {
  echo "Hello, world!"
}
greet
To call a function, simply type its name. For example:
#!/bin/bash
function greet {
  echo "Hello, world!"
}
greet
We can also pass arguments to a function by listing them after the function name, separated by spaces. For example:
#!/bin/bash
function greet {
  echo "Hello, $1!"
}
greet John
Inside the function, the arguments are referred to as $1, $2, $3, and so on. The first argument is $1, the second is $2, and so on.
We can use functions to modularize the script and make it easier to read and maintain. Functions can be used to perform a specific task, and then called multiple times throughout the script. This can make it easier to reuse code and avoid repeating the same code multiple times.
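The points above can be sketched in one small example; the add function is invented for illustration:

```shell
#!/bin/bash
# A reusable function that adds two numbers passed as $1 and $2.
# "local" keeps the variable from leaking into the rest of the script.
add() {
    local sum=$(( $1 + $2 ))
    echo "$sum"
}

# Capture the function's output with command substitution
result=$(add 2 3)
echo "2 + 3 = $result"
# prints: 2 + 3 = 5
```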
This is one of the basic questions asked in intermediate-level Linux shell scripting interviews.
One of the most frequently posed UNIX and Shell Scripting interview questions, be ready for it.
In a shell script, we can use variables to store data and manipulate it. To define a variable, we can use the following syntax:
variable_name=value
For example, the following script defines a variable called message and assigns it the value "Hello, world!":
#!/bin/bash
message="Hello, world!"
echo $message
To access the value of a variable, we can use the dollar sign ($) followed by the variable name. For example:
#!/bin/bash
message="Hello, world!"
echo $message
We can also use variables to store the output of a command. For example:
#!/bin/bash
current_directory=$(pwd)
echo "The current directory is $current_directory"
We can also use variables to store the results of arithmetic expressions. For example:
#!/bin/bash
x=5
y=3
result=$((x + y))
echo "The result is $result"
In a shell script, it is important to remember that variables are case-sensitive and that they must be referenced using the dollar sign ($).
It is also a good practice to use descriptive names for variables to make the code easier to read and understand, and it can be helpful in bash scripting interview questions.
Here are some common shell scripting errors and ways to troubleshoot them:
Shell scripts often use constructs such as loops and conditional statements to perform tasks. Below are some examples:
for loop: The for loop is used to iterate over a list of items. For example:
for i in 1 2 3 4 5
do
  echo $i
done
This script will print the numbers 1 through 5 to the console.
while loop: The while loop is used to execute a block of code repeatedly while a certain condition is met. For example:
counter=0
while [ $counter -lt 5 ]
do
  echo $counter
  counter=$((counter+1))
done
This script will print the numbers 0 through 4 to the console.
if statement: The if statement is used to execute a block of code if a certain condition is met.
For example:
if [ $1 -gt 5 ]
then
  echo "The first argument is greater than 5"
else
  echo "The first argument is not greater than 5"
fi
This script will print "The first argument is greater than 5" if the first argument passed to the script is greater than 5, and "The first argument is not greater than 5" otherwise.
case statement: The case statement is used to execute a block of code based on a value matching a pattern. For example:
case $1 in
  start)
    echo "Starting service..."
    # Start the service here
    ;;
  stop)
    echo "Stopping service..."
    # Stop the service here
    ;;
  *)
    echo "Invalid argument"
    ;;
esac
This script will start the service if the first argument passed to the script is "start", stop the service if the first argument is "stop", and print "Invalid argument" if the first argument does not match either of those patterns.
This is one of the most frequently asked UNIX Shell Scripting interview questions.
Regular expressions, sometimes known as "regex," are an effective technique for finding patterns in text. In a shell script, we can use regular expressions with the grep command to search for patterns in a file or stream of input.
Below is an example of using regular expressions with grep in a shell script:
grep -E '^[0-9]+$' input.txt
This command will search the input.txt file for lines that consist only of one or more digits (0-9). The -E option tells grep to use extended regular expressions, and the ^ and $ characters match the start and end of the line, respectively.
We can also use regular expressions with the sed command to perform search and replace operations on a file or stream of input. Below is an example:
sed -E 's/[0-9]+/X/g' input.txt
This command will search the input.txt file for any sequences of one or more digits (0-9) and replace them with the letter "X". The -E option tells sed to use extended regular expressions, and the s/[0-9]+/X/g expression tells sed to perform a global (g) search and replace operation, replacing all occurrences of one or more digits with "X".
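The two tools are often combined in a pipeline. The sketch below filters lines that look like email addresses and then masks the domain; the sample data and the "REDACTED" replacement are invented for illustration:

```shell
#!/bin/bash
# grep keeps only lines matching an email-like pattern,
# then sed replaces everything after the @ sign
printf 'alice@example.com\nnot an email\nbob@test.org\n' \
  | grep -E '^[[:alnum:]]+@[[:alnum:].]+$' \
  | sed -E 's/@.*/@REDACTED/'
# prints:
# alice@REDACTED
# bob@REDACTED
```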
In a shell script, we can use the if statement to execute a block of commands based on the value of a particular expression. The syntax for an if statement is:
if condition
then
  command1
  command2
  ...
fi
We can also use the else and elif (else-if) clauses to specify additional conditions and blocks of commands to execute.
Below is an illustration of an if statement that checks to see if a file is present:
#!/bin/bash
if [ -f "/path/to/file" ]
then
  echo "File exists"
else
  echo "File does not exist"
fi
In this example, the [ -f "/path/to/file" ] is a test that checks whether the file at the specified path exists and is a regular file. If the test evaluates to true, the echo command will be executed; if it evaluates to false, the else block will be executed.
We can also use the case statement to execute a block of commands based on the value of a particular expression. The syntax for a case statement is:
case expression in
  pattern1)
    command1
    command2
    ...
    ;;
  pattern2)
    command1
    command2
    ...
    ;;
  ...
esac
Below is an example of a case statement that tests the value of a variable:
#!/bin/bash
case $VAR in
  abc)
    echo "VAR is set to 'abc'"
    ;;
  def)
    echo "VAR is set to 'def'"
    ;;
  *)
    echo "VAR is set to something else"
    ;;
esac
In this example, the case statement will execute the appropriate block of commands based on the value of the $VAR variable. The * pattern is a catch-all that will match any value not matched by the other patterns.
Errors and exceptions in a shell script can be handled using the following methods:
exit status: Every command in a shell script returns an exit status. An exit status of zero indicates success, while a non-zero exit status indicates failure. We can use the $? variable to check the exit status of a command and take appropriate action based on the status. For example:
#!/bin/bash
# run a command
some_command
# check its exit status
if [ $? -eq 0 ]; then
  echo "Command succeeded"
else
  echo "Command failed"
fi
try-catch blocks: Bash does not have built-in try and throw statements, but similar behavior can be emulated using functions and exit statuses. The try function contains the code that might fail, and the catch function contains the code to handle the failure. Below is an example:
#!/bin/bash
try() {
  # code that might fail
  some_command
}
catch() {
  # code to handle the failure
  echo "$1" >&2
}
if try; then
  echo "try block succeeded"
else
  catch "Exception: some_command failed"
fi
trap: The trap command allows us to specify a command to run when a particular signal is received by the script. We can use the trap command to handle exceptions and errors in our script. For example:
#!/bin/bash
# define the trap function
trap 'echo "Error: command failed" >&2; exit 1' ERR
# run a command that might fail
some_command
# remove the trap
trap - ERR
echo "Command succeeded"
set -e: We can use the set -e option to make the script exit immediately if any command returns a non-zero exit status. This can be useful for handling errors and exceptions in your script. For example:
#!/bin/bash
# set the -e option
set -e
# run a command that might fail
some_command
echo "Command succeeded"
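The effect of set -e can be observed safely by running a failing script in a child shell (the commands below are illustrative):

```shell
#!/bin/bash
# With set -e the child shell exits at "false" and never reaches the
# second echo, so the || branch of the outer command runs instead
bash -c 'set -e; false; echo "not reached"' || echo "script exited early"
# prints: script exited early
```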
Expect to come across this popular question in UNIX Scripting interviews.
There are many ways to work with files and directories in a shell script. Here are a few common ones:
Listing Files and Directories: We can use the ls command to list the files and directories in a directory. For example, to list all the files and directories in the current directory, we can use:
#!/bin/bash
ls
We can use various options with the ls command to customize the output. For example, to list the files and directories in a long format, sorted by modification time, we can use:
#!/bin/bash
ls -lt
Changing Directories: We can use the cd command to change the current working directory.
For example, to change the current directory to /etc, we can use:
#!/bin/bash
cd /etc
Reading from Files: We can use the cat command to print the contents of a file to the terminal. For example, to print the contents of a file myfile.txt, we can use:
#!/bin/bash
cat myfile.txt
We can also use the more and less commands to view the contents of a file, which allow us to page through it.
Writing to Files: We can use the echo command to write text to a file. For example, to write the text "Hello, World!" to a file myfile.txt, you can use:
#!/bin/bash
echo "Hello, World!" > myfile.txt
We can also use the tee command to write the output of a command to a file, while still printing it to the terminal. For example:
#!/bin/bash
some_command | tee myfile.txt
Copying Files: We can use the cp command to copy a file. For example, to copy a file src.txt to dest.txt, we can use:
#!/bin/bash
cp src.txt dest.txt
We can also use the cp command to copy directories. For example, to copy a directory src to dest, we can use:
#!/bin/bash
cp -r src dest
Moving and Renaming Files: We can use the mv command to move or rename a file. For example, to rename a file src.txt to dest.txt, we can use:
#!/bin/bash
mv src.txt dest.txt
We can also use the mv command to move a file to a different directory. For example, to move a file src.txt to the /tmp directory:
#!/bin/bash
mv src.txt /tmp
Removing Files: We can use the rm command to remove a file. For example, to remove a file myfile.txt:
#!/bin/bash
rm myfile.txt
We can also use the rm command with the -r option to remove directories and their contents.
Here are a few common pitfalls to watch out for when writing shell scripts:
To pass arguments to a shell script, we can simply list them after the script name, separated by space. For example:
./myscript.sh arg1 arg2 arg3
Inside the script, we can access the arguments using the $1, $2, $3, etc. variables. For example:
#!/bin/bash
echo "Argument 1: $1"
echo "Argument 2: $2"
echo "Argument 3: $3"
We can also use the $* and $@ variables to access all of the arguments as a single string or as separate strings, respectively.
For example:
#!/bin/bash
# Print all arguments as a single string
echo "All arguments: $*"
# Print all arguments as separate strings
echo "All arguments: $@"
We can also use the $# variable to get the total number of arguments passed to the script.
For example:
#!/bin/bash
echo "Total number of arguments: $#"
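These variables are often combined to loop over every argument. The sketch below uses a function to stand in for a script (print_args is a hypothetical name); quoting "$@" preserves arguments that contain spaces:

```shell
#!/bin/bash
# Iterate over all arguments, then report how many there were
print_args() {
    for arg in "$@"; do
        echo "Got: $arg"
    done
    echo "Total: $#"
}

print_args one "two words" three
# prints:
# Got: one
# Got: two words
# Got: three
# Total: 3
```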
In Unix and Unix-like operating systems, a file or directory name that starts with a dot (.) is considered to be a hidden file or directory. Hidden files and directories are not normally displayed when listing the contents of a directory using commands such as ls.
To list hidden files and directories, we can use the ls -a command, which will show all files and directories, including hidden ones.
The dot (.) at the beginning of a file name has no special meaning to the operating system, but it is often used by convention to indicate a configuration file or other file that is meant to be hidden from normal view. For example, the file .bashrc in a user's home directory is a configuration file for the Bash shell, and the file .gitignore is used to specify files that should be ignored by Git.
Keep in mind that the dot (.) is also used to represent the current directory and the parent directory in a file path. For example, ./myfile refers to the file myfile in the current directory, and ../myfile refers to the file myfile in the parent directory.
The printf command is an alternative to echo that is available in Unix and Unix-like operating systems. Like echo, printf is used to output text to the console or to a file.
One advantage of printf over echo is that it is more flexible and can be used to print text in a specific format. It can also handle escape sequences and special characters more reliably than echo.
Here is an example of how printf can be used:
printf "Hello, %s\n" "world"
This will print the string "Hello, world" followed by a newline character. The %s is a placeholder for a string, and the \n is an escape sequence that represents a newline.
We can use other placeholders to print different types of data, such as integers (%d), floating point numbers (%f), and more.
For example:
printf "Number: %d\n" 42
printf "Float: %.2f\n" 3.1415
This will print "Number: 42" and "Float: 3.14", respectively.
We should keep in mind that printf is not available in all shells and may not be available on all systems. However, it is a useful tool to have in your toolkit when working with shell scripts.
To find out how long a Unix or Linux system has been running, we can use the uptime command. This command will show the current time, the length of time the system has been running, the number of users logged in, and the load average over the last 1, 5, and 15 minutes.
For example:
$ uptime
 21:52:06 up 3 days, 6:22, 2 users, load average: 0.00, 0.00, 0.00
In this example, the system has been running for 3 days and 6 hours (3 days, 6:22).
We can also use the who -b command to show the time that the system was last booted. For example:
$ who -b
         system boot  2021-07-14 21:50
This will show the date and time that the system was last booted.
Keep in mind that these commands may not work on all systems, and the output may vary depending on the specific operating system and version.
A must-know for anyone heading into a Shell Scripting interview, this is one of the most frequently asked UNIX Shell Scripting interview questions.
To find out which shells are available on the Unix or Linux system, we can use the cat command to display the contents of the /etc/shells file. This file contains a list of all the shells that are available on the system.
For example:
$ cat /etc/shells
# /etc/shells: valid login shells
/bin/sh
/bin/bash
/usr/bin/sh
/usr/bin/bash
/usr/local/bin/bash
We can also use chsh (change shell) command to see which shells are available and to change the default shell for the user account. For example:
$ chsh
Changing the login shell for username
Enter the new value, or press ENTER for the default
        Login Shell [/bin/bash]:
This will display a list of available shells, and we can choose one from the list or enter the path to a different shell.
We should know that the list of available shells may vary depending on the specific operating system and version, and may include other shells such as zsh, csh, and more.
There are several commands available in Unix and Unix-like operating systems to check the disk usage of a file system. Some of the most common ones are:
df: The df command displays information about the available and used disk space on a file system. By default, it shows the size, used space, and available space for all file systems.
For example:
$ df
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda1      469059992 415097728  53962256  89% /
We can use the -h option to display the sizes in "human-readable" format, with units such as MB and GB.
For example:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       450G  391G   51G  89% /
du: The du command displays the disk usage of individual directories and files. By default, it shows the sizes of directories and their contents in blocks.
For example:
$ du
16      ./dir1
8       ./dir1/file1
4       ./dir1/file2
4       ./dir2
4       ./dir2/file1
8       .
We can use the -h option to display the sizes in "human-readable" format, with units such as MB and GB.
For example:
$ du -h
16K     ./dir1
8.0K    ./dir1/file1
4.0K    ./dir1/file2
4.0K    ./dir2
4.0K    ./dir2/file1
8.0K    .
We can also use the -c option to show the total size of all directories and files.
For example:
$ du -ch
16K     ./dir1
8.0K    ./dir1/file1
4.0K    ./dir1/file2
4.0K    ./dir2
4.0K    ./dir2/file1
8.0K    .
32K     total
ncdu is a text-based disk usage viewer that allows you to navigate through directories and see the sizes of individual files and directories in real-time. It is a useful tool for finding and deleting large files and directories to free up space on your file system.
To use ncdu, we need to run the ncdu command and navigate through the directories using the arrow keys and the enter key. Pressing d will delete a file or directory and pressing q will exit the program.
We should know that these are just a few examples of the commands available to check disk usage. There are many other tools and options available, depending on specific needs and preferences.
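As a small, self-contained sketch (the directory names and file sizes here are invented for the example), du and sort can be combined to rank subdirectories by size:

```shell
#!/bin/sh
# Build a throwaway directory tree just for this demonstration
tmp=$(mktemp -d)
mkdir -p "$tmp/dir1" "$tmp/dir2"
head -c 4096 /dev/zero > "$tmp/dir1/file1"   # ~4 KB
head -c 1024 /dev/zero > "$tmp/dir2/file1"   # ~1 KB

# -s summarizes each argument, -h prints human-readable sizes;
# sort -h (GNU coreutils) orders by the K/M/G suffixes
du -sh "$tmp"/*/ | sort -h

# Clean up
rm -r "$tmp"
```

The same pattern against a real path (for example `du -sh /var/log/*/ | sort -h`) is a quick way to find what is eating disk space.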
It's no surprise that this one pops up often in Shell Scripting interview questions for DevOps.
Awk is a command-line utility that allows you to perform operations on a stream of text data, such as extracting and transforming the contents of a file or generating reports from data. It is particularly useful for working with structured data, such as tab-separated values (TSV) or comma-separated values (CSV).
Below is a brief overview of how awk works:
Awk reads input one line at a time and splits each line into fields based on a predefined separator (by default, this is a whitespace character).
The fields can then be accessed and manipulated using special variables (e.g., $1 refers to the first field, $2 to the second field, etc.).
awk processes each line according to a set of rules, which specify the actions to be taken based on the contents of the fields.
The modified lines are then printed to the output.
Here is an example of using awk to print the second and fifth fields of a file containing tab-separated values:
awk -F'\t' '{print $2, $5}' input.txt > output.txt
This command reads the contents of input.txt, sets the field separator to a tab character (-F'\t'), and then prints the second and fifth fields of each line to output.txt.
Awk is a very powerful and versatile utility, and there are many additional commands and options available for performing more complex operations.
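For instance, awk can aggregate a numeric column with an END block. The file name and data below are invented for the example:

```shell
#!/bin/sh
# Create a small sample CSV (hypothetical data)
printf 'alice,2021,10\nbob,2022,15\ncarol,2023,25\n' > sales.csv

# Sum the third field of every line; the END block runs after the last line
awk -F',' '{ total += $3 } END { print total }' sales.csv   # prints 50

rm sales.csv
```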
Practice more Shell Scripting interview questions and answers like this to make a lasting impression on your recruiters.
UNIX provides several security provisions for protecting files and data, including file permissions (read, write, and execute bits for the owner, the group, and others), file ownership, special permission bits such as setuid and setgid, and encryption utilities:
It is important to note that these security provisions are not mutually exclusive, and we can use a combination of these measures to protect our files and data.
sed is a command-line utility that allows you to perform text transformations on an input stream (a file or input from a pipeline). It is commonly used for extracting part of a file, transforming the contents of a file, or deleting lines from a file.
Here is a brief overview of how sed works:
sed reads input one line at a time and performs the specified transformation on each line.
The transformations are specified using a set of commands, which are provided as arguments to sed.
The modified lines are then printed to the output.
Here is an example of using sed to replace all occurrences of the word "apple" with the word "banana" in a file:
sed 's/apple/banana/g' input.txt > output.txt
This command reads the contents of input.txt, performs a substitution on each line to replace "apple" with "banana", and writes the modified lines to output.txt. The g at the end of the substitution command specifies that the substitution should be performed globally on each line (i.e., all occurrences of "apple" should be replaced, not just the first one).
sed is a very powerful and versatile utility, and there are many additional commands and options available for performing more complex transformations.
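Two more common sed idioms, shown here on a small invented file: printing a range of lines, and deleting lines that match a pattern:

```shell
#!/bin/sh
printf 'one\ntwo\nthree\nfour\n' > demo.txt

# Print only lines 2 through 3 (-n suppresses the default output)
sed -n '2,3p' demo.txt

# Delete every line containing "three"
sed '/three/d' demo.txt

rm demo.txt
```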
Bash is a weakly typed language because it does not require variables to be explicitly declared with a specific data type. Instead, variables in bash are automatically interpreted based on the context in which they are used. This means that we can use a variable in one part of the script as a string, and then use it in another part of the script as an integer, for example.
This can be both a strength and a weakness of bash. On one hand, it makes it easy to use variables and write scripts quickly, as we don't have to worry about declaring the data types of the variables. On the other hand, it can also make it easy to introduce bugs into our scripts, as we might not realize that a variable is being interpreted differently than we intended.
Overall, the weakly typed nature of bash is something that we should be aware of as we write scripts, and we should ensure that the variables are being used in the way that we intended.
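A short bash sketch of this behavior; note the last line, where a non-numeric value in an arithmetic context silently evaluates to 0 instead of raising an error:

```shell
#!/bin/bash
x=5
echo "x is $x"        # string context
echo $(( x + 2 ))     # arithmetic context: prints 7

y="hello"
echo $(( y + 2 ))     # "hello" names no numeric variable, so it counts as 0: prints 2
```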
In Unix-like systems, the pipe operator (|) is used to redirect the output of one command to the input of another command. It allows us to chain multiple commands together and process the output of one command as the input of the next command.
For example, consider the following command:
ls -l | grep "foo"
This command will list the files in the current directory using the ls command, and then filter the output to display only the lines that contain the string "foo" using the grep command. The output of the ls command is piped to the grep command using the | operator, allowing the grep command to process the output of the ls command as its input.
We can use multiple pipe operators to chain together multiple commands. For example:
cat file.txt | grep "foo" | sort | uniq -c
This command will display the contents of the file "file.txt" using the cat command, filter the output to display only the lines that contain the string "foo" using the grep command, sort the output alphabetically using the sort command, and count the number of occurrences of each unique line using the uniq command with the -c option.
Overall, the pipe operator allows us to run several commands in one line by chaining them together and processing the output of one command as the input of the next. This can be a powerful and efficient way to manipulate data and perform complex tasks on the command line.
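As a concrete sketch (file contents invented for the example), the same pattern can produce a frequency count of lines:

```shell
#!/bin/sh
printf 'foo\nbar\nfoo\nbaz\nfoo\n' > words.txt

# sort groups identical lines together, uniq -c counts each group,
# and sort -rn puts the most frequent line first
sort words.txt | uniq -c | sort -rn

rm words.txt
```

For this input, the first line of output is `3 foo`, since "foo" appears three times.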
One of the most frequently posed Shell Scripting interview questions, be ready for it.
In a shell script, you can use the kill command to send a signal to a process. The kill command takes two arguments: the process ID of the process that we want to signal and the signal that we want to send.
For example, the following script sends the SIGINT signal (which is equivalent to pressing Ctrl+C) to the process with the ID 12345:
#!/bin/bash
kill -SIGINT 12345
We can also use the killall command to send a signal to all processes with a particular name. For example, the following script sends SIGKILL signal (which forcibly terminates the process) to all processes with the name "foo":
#!/bin/bash
killall -SIGKILL foo
We can use the ps command to list the processes running on the system, and the grep command to filter the output to only show processes with a particular name. For example, the following script lists all processes with the name "foo":
#!/bin/bash
ps aux | grep foo
We can also use the wait command to wait for a process to complete before continuing with the rest of the script. The wait command takes the process ID of the process that we want to wait for as an argument. For example:
#!/bin/bash
foo &
pid=$!
wait $pid
echo "The foo process has completed"
The `&` symbol at the end of the `foo` command runs the process in the background. The `pid` variable stores the process ID of the `foo` process. The `wait` command then waits for the `foo` process to complete before continuing with the rest of the script.
In a shell script, we can use processes and signals to manage and control the execution of our script. By using commands like `kill`, `killall`, and `wait`, we can manage the processes running on our system and ensure that the script executes correctly.
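A minimal runnable sketch tying these pieces together (sleep stands in here for a real worker process):

```shell
#!/bin/bash
# Start a background process and remember its PID
sleep 30 &
pid=$!

# Ask it to terminate, then wait to collect its exit status
kill -TERM "$pid"
wait "$pid"
echo "Process $pid exited with status $?"   # 143 = 128 + 15 (SIGTERM)
```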
Here are some tips for optimizing the performance of a shell script: prefer shell built-ins over external commands inside loops, avoid spawning unnecessary subshells and pipelines, let tools such as awk or sed process a file in a single pass instead of reading it line by line in the shell, and use the time command to measure where a long-running script spends its time.
A staple in Shell Scripting interview questions, be prepared to answer this one.
Shell scripts are often used to automate tasks on a Unix or Linux system. They can be used to configure systems and applications, perform maintenance tasks, and more.
Shell scripts can be used to automate tasks related to working with containers and container orchestration tools like Docker and Kubernetes, for example building and pushing images, starting and stopping containers, and applying configuration with the docker and kubectl command-line tools.
To manipulate and manage processes with shell scripts, you can use the command-line utilities that are available on your system. Some common utilities for managing processes include ps, kill, killall, and pgrep.
Here is an example of how you might use these utilities in a shell script to list, kill, and then confirm the termination of a process:
#!/bin/bash

# List the process (the [p] in the pattern stops grep from matching its own process)
ps aux | grep '[p]rocess-name'

# Kill the process
kill $(ps aux | grep '[p]rocess-name' | awk '{print $2}')

# Confirm that the process has been terminated
if [ $(ps aux | grep '[p]rocess-name' | wc -l) -eq 0 ]
then
    echo "Process terminated successfully"
else
    echo "Error: process not terminated"
fi
We can also use other utilities, such as systemctl or service, to start, stop, and manage system services. For example:
# Start a service
systemctl start service-name

# Stop a service
systemctl stop service-name

# Restart a service
systemctl restart service-name
Shell scripts can be used to integrate with other tools or systems in a variety of ways. Some common ways to use shell scripts for integration include calling other programs' command-line interfaces, exchanging data through files, pipes, and environment variables, calling web APIs with tools such as curl, and being invoked automatically by cron jobs or CI/CD pipelines.
Overall, shell scripts are a powerful tool for automating and integrating a wide variety of tasks and processes within a larger ecosystem.
There are many ways to use shell scripts to automate tasks such as deployment or testing. Here are a few examples:
Deployment: Shell scripts can be used to automate the process of deploying software to various environments. For example, we might use a shell script to perform the following tasks as part of a deployment process: pulling the latest code from version control, building the software, copying the artifacts to the target servers, and restarting the affected services.
Testing: Shell scripts can be used to automate the process of running tests against software. For example, you might use a shell script to perform the following tasks as part of a testing process: preparing the test environment and test data, running the test suite, collecting and reporting the results, and cleaning up afterwards.
This question is a regular feature in Shell Scripting interview questions for DevOps, be ready to tackle it.
There are several ways to use shell scripts to interact with APIs and other external services. Below are a few examples:
curl: The curl command can be used to send HTTP requests to an API and receive the response. For example:
curl https://api.example.com/endpoint
jq: The jq command is a tool for parsing and manipulating JSON data. It can be used in combination with curl to extract specific values from the API response. For example:
curl https://api.example.com/endpoint | jq .key
wget: The wget command can be used to download files from the web, including files returned by an API. For example:
wget https://api.example.com/endpoint/file.zip
grep: The grep command can be used to search for specific patterns in text. It can be used in combination with other commands, such as curl, to extract specific information from the output. For example:
curl https://api.example.com/endpoint | grep pattern
There are many other tools and techniques that can be used to interact with APIs and other external services from a shell script.
Shell scripts can be used to manage and manipulate files and directories in a variety of ways. Some common tasks that can be performed using shell scripts include creating, copying, moving, renaming, and deleting files and directories, changing permissions and ownership, searching for files with find, and archiving or compressing data for backups.
Overall, shell scripts provide a powerful set of tools for managing and manipulating files and directories on a system, allowing us to automate tasks such as backups, file cleanup, and more.
To check if a file exists on the filesystem using a bash shell script, we can use the test command with the -f option. Below is an example of how we might use this command to check if a file called "file.txt" exists:
if test -f "file.txt"; then
    echo "File exists"
else
    echo "File does not exist"
fi
Alternatively, we can use the [ -f "file.txt" ] syntax to achieve the same result.
Below is an example of how we might use this command in a script:
#!/bin/bash

if [ -f "file.txt" ]; then
    echo "File exists"
else
    echo "File does not exist"
fi
This script will check for the existence of a file called "file.txt" and will print "File exists" if the file exists or "File does not exist" if the file does not exist.
We can also use the -d option to check for the existence of a directory or the -e option to check for the existence of either a file or a directory. This is also one of the most frequently asked Shell Scripting questions.
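A brief sketch of the three test operators side by side, using /tmp, which exists as a directory on most Unix-like systems:

```shell
#!/bin/bash
target="/tmp"

[ -e "$target" ] && echo "exists"
[ -d "$target" ] && echo "is a directory"
[ -f "$target" ] && echo "is a regular file"
# For /tmp this prints "exists" and "is a directory"
```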
The difference between [[ $string == "efg*" ]] and [[ $string == efg* ]] is the presence of double quotes around the string being compared.
In the first example, [[ $string == "efg*" ]], the double quotes around "efg*" indicate that the string should be treated literally, and the * character should be interpreted as a literal asterisk. This means that the expression will only evaluate to true if the value of $string is exactly "efg*".
In the second example, [[ $string == efg* ]], the double quotes are not present, which means that the * character will be interpreted as a wildcard. This means that the expression will evaluate to true if the value of $string starts with "efg" followed by any number of characters.
For example:
string="efgabc"

if [[ $string == "efg*" ]]; then
    echo "Match with double quotes"
else
    echo "No match with double quotes"
fi

if [[ $string == efg* ]]; then
    echo "Match without double quotes"
else
    echo "No match without double quotes"
fi
The output of this script would be:
No match with double quotes
Match without double quotes
Overall, the use of double quotes can be important in bash to ensure that strings are treated literally, and to prevent special characters from being interpreted as wildcards or other special symbols.
The crontab command is used to schedule tasks to be executed automatically at a specified time. When we use the crontab command to schedule a task, the task is stored in one of two places: the user's personal crontab file (kept by the system, typically under /var/spool/cron), or the system-wide crontab file (/etc/crontab).
To view the tasks in user crontab file, we can use the crontab -l command. To edit the user crontab file, we can use the crontab -e command. To remove the user crontab file, we can use the crontab -r command.
To schedule a task using the crontab command, we need to specify the time and date when the task should be executed, as well as the command that should be executed. The time and date are specified using a special syntax called the "crontab format," which consists of five fields: minute, hour, day of month, month, and day of week. Each field can contain a single value, a list of values, or a range of values, separated by commas.
For example, to schedule a task to be executed every hour, you might use a crontab entry like this:
0 * * * * /path/to/command
This entry specifies that the task should be run every hour, at the top of the hour (when the minute field is 0).
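A few more illustrative entries (the script paths are placeholders):

```
# Daily at 02:00
0 2 * * * /path/to/backup.sh

# Every 15 minutes (step values like */15 are supported by most modern crons)
*/15 * * * * /path/to/check.sh

# Weekdays (Monday-Friday) at 09:00
0 9 * * 1-5 /path/to/report.sh
```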
To count the number of words in a given file using a bash shell script, we can use the wc command with the -w option.
Here is an example of a command sequence that we can use to count the words in a file called "file.txt":
# Count the number of words in the file
word_count=$(wc -w < "file.txt")

# Print the result
echo "Number of words: $word_count"

This script redirects the file into the wc command with the -w option, which counts the number of words. (Redirecting with < keeps the filename out of the output; when the filename is passed as an argument, wc prints it after the count.) The output of the wc command is captured by the $(...) syntax and stored in the word_count variable. The script then prints the value of the word_count variable using the echo command.
The wc command can also be used to count the number of lines in a file (using the -l option) or the number of bytes in a file (using the -c option).
For example, to count the number of lines in a file called "file.txt", we could use the following script:
# Count the number of lines in the file
line_count=$(wc -l < "file.txt")

# Print the result
echo "Number of lines: $line_count"
And to count the number of bytes in a file called "file.txt", you could use the following script:
# Count the number of bytes in the file
byte_count=$(wc -c < "file.txt")

# Print the result
echo "Number of bytes: $byte_count"
In Unix-like operating systems, the "s" permission bit is used to set the setuid (or setgid) permission on a file. When the setuid permission is set on a file, it allows the file to be executed with the permissions of the owner of the file, rather than the permissions of the user who is executing the file. This can be useful for allowing users to execute a program with superuser privileges, even if they do not have the necessary permissions to run the program directly.
For example, considering a file called "sudo", which is owned by the root user and has the setuid permission set. If a user executes this file, it will be run with the permissions of the root user, even if the user does not have root permissions themselves. This can be useful for allowing users to execute commands that require superuser privileges, such as installing software or modifying system settings.
The setgid permission works in a similar way, but it allows the file to be executed with the permissions of the group owner of the file, rather than the permissions of the user who is executing the file.
The setuid and setgid permissions are represented by the "s" permission bit in the file's permissions string. For example, if a file has the permissions "rwsr-sr-x", the "s" bits in the owner and group execute positions indicate that the setuid and setgid permissions are set on the file.
It is important to use the setuid and setgid permissions carefully, as they can be a security risk if used improperly. In particular, it is important to make sure that setuid and setgid programs are carefully designed and implemented, as vulnerabilities in these programs can be exploited to gain unauthorized access to the system.
Expect to come across this popular question in UNIX Scripting interviews.
To create a directory with the desired permissions, I can use the mkdir command and specify the -m option to set the permissions of the directory.
To allow anyone in the group to create and access files in the directory, but not delete files created by others, I need read, write, and execute permission for the group on the directory, plus the sticky bit.

To create the directory with these permissions, I will use the following command:

mkdir -m 1775 /path/to/directory

This will create the directory with the permissions rwxrwxr-t, which allow the owner and members of the group to create and access files in the directory. The sticky bit (the leading 1 in the mode) ensures that only a file's owner, the directory's owner, or root can delete or rename a file, even though other group members have write access to the directory.
It is also important to note that setgid permission can be used to ensure that new files created in the directory are automatically owned by the group, rather than the user who created the file. To set the setgid permission on the directory, I will use the chmod command with the g+s option:
chmod g+s /path/to/directory
This will set the setgid permission on the directory, ensuring that new files created in the directory are automatically owned by the group.
To monitor a log file that is constantly updating, we can use the tail command with the -f option. The tail command is used to display the last few lines of a file, and the -f option allows it to follow the file and display new lines as they are added to the file.
For example, to monitor the log file "log.txt" and display new lines as they are added to the file, we can use the following command:
tail -f log.txt
This will display the last few lines of the log file, and then continue to display new lines as they are added to the file. The tail command will keep running until we stop it, so we can use it to effectively monitor the log file in real-time.
We can also use the -n option to specify the number of lines to display, or the -s option (used together with -f) to specify the sleep interval between checks for new data.
For example, to follow the log file starting from its last 100 lines, checking for new data every 5 seconds, we can use the following command:

tail -f -n 100 -s 5 log.txt
Overall, the tail command is a useful tool for monitoring log files and other files that are constantly updating. It allows us to view new lines as they are added to the file, making it easier to track changes and identify issues.
To set up a connection to a remote server where we can execute commands, we can use one of the following methods: SSH (secure shell), Telnet, or a remote desktop protocol such as VNC or RDP.
Overall, the choice of which method to use to connect to a remote server will depend on your specific needs and the capabilities of the server. SSH and Telnet are commonly used for command-line access to servers, while remote desktop protocols are more suitable for interacting with a graphical desktop environment.
A must-know for anyone heading into a Shell Scripting interview, this is one of the most frequently asked UNIX Scripting interview questions.
To find the number of lines in a file that contains the word "LINUX" using a bash shell script, we can use the grep command with the -c option.
Below is an example of a script that you can use to find the number of lines in a file called "file.txt" that contain the word "LINUX":
# Find the count of lines containing "LINUX" in the file
line_count=$(grep -c "LINUX" "file.txt")

# Print the result
echo "Number of lines containing LINUX: $line_count"
This script will execute the grep command with the -c option, which searches for the specified pattern ("LINUX") in the file and counts the number of lines that match the pattern. The output of the grep command will be captured by the $(...) syntax and stored in the line_count variable. The script will then print the value of the line_count variable using the echo command.
We can also use the -i option to ignore case when searching for the pattern, or the -w option to match only whole words.
For example, to search for the pattern "linux" regardless of case, we can use the following script:
# Find the count of lines containing "linux" in the file, ignoring case
line_count=$(grep -ci "linux" "file.txt")

# Print the result
echo "Number of lines containing linux: $line_count"
These are shell scripting scenario-based interview questions and are generally asked of experienced candidates.
To print a list of every user's login name on a Unix-like system, we can use the cut and sort commands to extract the login names from the /etc/passwd file and sort them alphabetically.
Below is an example of a command sequence that we can use to print a list of login names:
# Extract the login names from the /etc/passwd file
login_names=$(cut -d: -f1 /etc/passwd)

# Sort the login names alphabetically
sorted_login_names=$(echo "$login_names" | sort)

# Print the login names
echo "$sorted_login_names"
This script will use the cut command to extract the first field (the login name) from each line of the /etc/passwd file, using the : character as the delimiter. The output of the cut command will be stored in the login_names variable.
The script will then use the sort command to sort the login names alphabetically and store the sorted list in the sorted_login_names variable. Finally, the script will use the echo command to print the login names.
The /etc/passwd file is a system file that contains information about the users on the system, including their login names, home directories, and other details. Each line of the file represents a single user, and the fields are separated by : characters.
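An equivalent one-liner uses awk. A small sample file in /etc/passwd format is used below rather than the real file, so the entries are invented:

```shell
#!/bin/sh
# Two invented passwd-style entries
printf 'root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1::/usr/sbin:/usr/sbin/nologin\n' > sample_passwd

# Field 1 is the login name; fields are separated by ":"
awk -F: '{ print $1 }' sample_passwd | sort   # prints: daemon, then root

rm sample_passwd
```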
There are several ways to send mail using a shell script. One option is to use the mail command, which is a command-line utility for sending and receiving mail.
To use the mail command to send a message, you can use the following syntax:
echo "message" | mail -s "subject" recipient@example.com
This will send a message with the specified subject to the specified recipient. The message can be entered directly on the command line, or it can be piped to the mail command from another command.
For example, to send a message with the subject "Hello" to the recipient "user@example.com", you can use the following command:
echo "Hello, how are you?" | mail -s "Hello" user@example.com
We can also use the -a option to attach a file to the message, or the -c option to specify a carbon copy recipient. (Note that option behavior varies between mail implementations; in GNU mailutils, for example, -a appends a header rather than attaching a file.)
For example, to send a message with the subject "Hello" to the recipient "user@example.com", with a carbon copy to "cc@example.com" and an attachment "attachment.txt", we can use the following command:
echo "Hello, here is the attachment you requested." | mail -s "Hello" -a "attachment.txt" -c cc@example.com user@example.com
A common question in Shell Scripting interviews, don't miss this one.
A program written in a shell programming language, such as bash, csh, or ksh, is referred to as a shell script. Shell scripts are used to automate operations that are frequently carried out with a shell, such as launching a web server, configuring a development environment, or delivering software.
Shell scripts are advantageous because they enable the automation of sequences of commands that would otherwise require manual execution, which saves time and reduces error-prone work. Shell scripts are also portable: they can be executed on any machine that has a compatible shell installed.
Shell scripts are frequently used in software development, DevOps, and system administration tasks. They can be used to automate processes for maintaining infrastructure, building and deploying software, and running tests. Additionally, they can be used to automate platform- or operating-system-specific operations.
Shell scripts can be used to develop unique utilities and tools, as well as to increase the functionality of already existing tools, in addition to automating operations. They can be used to execute operations that would be challenging or time-consuming to perform manually or to automate repetitive or complex processes.
To create a shell script, I will need a text editor, such as gedit, vim, or emacs. I can also use a word processor, such as Microsoft Word or Google Docs, but have to make sure to save the file as plain text.
To create a shell script, I will follow these steps:

1. Create a new file (for example, script.sh) in a text editor and start it with a shebang line that names the interpreter:

#!/bin/bash

2. Add the commands I want the script to run, save the file, and make it executable:

chmod +x script.sh

3. Run the script:

./script.sh
I can also run the script by specifying the path to the interpreter followed by the path to the script, like this:
/bin/bash script.sh
It is a good practice to include comments in our script to explain what the different parts of the script do. In a shell script, comments are denoted by a pound sign (#). Anything following the pound sign on the same line will be treated as a comment and ignored by the shell.
For example:
# This is a comment
echo "Hello, world!" # This is also a comment
This is one of the most frequently asked UNIX Shell Scripting interview questions.
The #! (shebang) is a special line at the beginning of a shell script that tells the operating system which interpreter to use to execute the script. The shebang line consists of a pound sign (#) and an exclamation point (!) followed by the path to the interpreter executable. For example, the shebang line for a bash script might look like this:
#!/bin/bash
The shebang line is not required for all shell scripts, but it is a good practice to include it so that the script can be run directly from the command line without having to specify the interpreter. For example, if we include the shebang line in a script and make the script executable (using chmod +x script.sh), we can run the script simply by typing ./script.sh instead of bash script.sh.
Without a shebang line, the operating system cannot determine which interpreter to use to execute the script; most shells will then fall back to their default interpreter, which may not be the one the script was written for.
In a shell script, comments are lines of text that are not executed as part of the script but are included to provide documentation and explanations for the code. Comments are used to make the code easier to understand and to help other developers (or future versions of ourselves) understand what the code is doing.
To include a comment in a shell script, we can use the # character followed by the text of the comment. For example:
# This is a comment
Anything following the `#` character on the same line will be treated as a comment, so you can use comments to annotate specific lines of code:
# Increment the value of the counter variable
counter=$((counter+1))
We can also use comments to include multi-line blocks of documentation or explanations:
: '
This is a multi-line comment.
This comment block can be used to include
detailed explanations or documentation for
the code below.
'
Note that the `:` command is a shell built-in that does nothing, so it can be used to create a block of comments without actually executing any code. The single quotes around the comment block are necessary so that the shell treats the text as a single, unused argument rather than trying to expand or execute it.
Expect to come across this popular question in UNIX Scripting interviews.
To run a shell script, I will need to have a shell interpreter installed on my system. Common shell interpreters include bash, csh, and ksh.
To run a shell script, I will follow these steps:
I will make sure that the script is executable. I can make a script executable by running the chmod command and setting the executable flag:
chmod +x script.sh
Run the script by typing:
./script.sh
Alternatively, we can specify the path to the interpreter followed by the path to the script, like this:
/bin/bash script.sh
We can replace /bin/bash with the path to the shell interpreter that we want to use.
We can also run a shell script by passing it to the interpreter as a command-line argument:
bash script.sh
Replace `bash` with the name of the shell interpreter that we want to use.
The path to the shell interpreter we want to utilize should be specified in a shebang line at the beginning of our script. This allows us to run the script by simply typing `./script.sh`, regardless of the default shell interpreter on our system.
For example, if our script is written in bash, we can include the following shebang line at the top of the script:
#!/bin/bash
This tells the system to use the bash interpreter to run the script.
We can also specify command-line arguments when running a shell script. These arguments are passed to the script and can be accessed within the script using the variables `$1`, `$2`, `$3`, and so on.
For example, the following script prints the first command-line argument:
#!/bin/bash
echo "The first argument is $1"
To run the script and pass it a command-line argument, we can type:
./script.sh foo
This will print "The first argument is foo".
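Beyond the individual positional variables, `$#` holds the argument count and `"$@"` expands to all arguments while preserving spaces inside quoted ones. A short sketch:

```shell
#!/bin/bash
echo "Got $# arguments"
for arg in "$@"; do
    echo "arg: $arg"
done
```

Running this as `./script.sh foo "bar baz"` prints "Got 2 arguments", then "arg: foo" and "arg: bar baz"; the quoted argument stays intact as one value.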
In a shell script, we can perform input and output redirection using the < and > symbols.
For example, to redirect the output of a command to a file, we can use the > symbol followed by the name of the file. For example:
# Redirect the output of the "ls" command to a file called "directory_listing.txt"
ls > directory_listing.txt
To append the output of a command to a file, you can use the >> symbol instead of >. For example:
# Append the output of the "ls" command to the file "directory_listing.txt"
ls >> directory_listing.txt
To redirect the input of a command from a file, you can use the < symbol followed by the name of the file. For example:
# Sort the contents of the file "unsorted_list.txt" and store the result in "sorted_list.txt"
sort < unsorted_list.txt > sorted_list.txt
We can also combine redirection with a pipe in a single command. For example:

# Sort the contents of the file "unsorted_list.txt", store the result in "sorted_list.txt", and also display it on the screen
sort < unsorted_list.txt | tee sorted_list.txt

In this example, the output of the sort command is piped to the tee command, which writes its input to the file "sorted_list.txt" and also copies it to the screen. (A plain > redirection cannot be combined with a pipe for this purpose: once the output has been redirected to a file, there is nothing left in the pipeline for the next command to read.)
Note that these redirections can also be used in combination with other shell commands and constructs, such as loops and conditional statements.
Below is a list of some common shell commands that you might use in a shell script:
These are just a few examples, and there are many more shell commands that we can use in a shell script. Some commands are specific to certain shells (e.g., Bash, Zsh, etc.), while others are available in most shells.
This is one of the most commonly asked shell scripting commands interview questions.
There are many different shell environments that are commonly used, including:
In a shell script, command line arguments are stored in special variables. The first argument is stored in the variable $1, the second argument is stored in the variable $2, and so on. The variable $0 contains the name of the script itself.
Below is an example of a simple script that uses command line arguments:
#!/bin/bash
echo "The first argument is: $1"
echo "The second argument is: $2"
echo "The third argument is: $3"
To use command line options in a script, we can use the getopts built-in command. This command allows us to specify which options the script should accept, and then provides a way for the script to access the values of those options.
Here is an example of a script that uses the getopts command:
#!/bin/bash
while getopts ":a:b:" opt; do
  case $opt in
    a)
      echo "Option -a was specified with value $OPTARG"
      ;;
    b)
      echo "Option -b was specified with value $OPTARG"
      ;;
    \?)
      echo "Invalid option: -$OPTARG"
      exit 1
      ;;
    :)
      echo "Option -$OPTARG requires an argument."
      exit 1
      ;;
  esac
done
In this example, the script will accept two options, -a and -b, which can be followed by a value. The getopts command will parse the command line arguments and set the variables $opt and $OPTARG accordingly. The script can then use these variables to determine which options were specified and what their values were.
In a shell script, we can use the for loop to iterate over a sequence of values. The syntax for a for loop is:
for variable in list
do
  command1
  command2
  ...
done
Here is an example of a for loop that iterates over a range of numbers:
#!/bin/bash
for i in {1..10}
do
  echo "$i"
done
This script will output the numbers 1 through 10.
You can also use the while loop to execute a block of commands repeatedly while a particular condition is true. The syntax for a while loop is:
while condition
do
  command1
  command2
  ...
done
Here is an example of a while loop that counts down from 10 to 1:
#!/bin/bash
i=10
while [ $i -gt 0 ]
do
  echo "$i"
  i=$((i-1))
done
This script will output the numbers 10 through 1.
We can also use the break and continue statements to control the flow of a loop. The break statement will cause the loop to terminate early, while the continue statement will skip the rest of the current iteration and move on to the next one.
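As a small illustration of both statements (the numeric range here is arbitrary), the following loop skips even numbers with continue and stops entirely when it reaches 7 with break:

```shell
#!/bin/bash
for i in {1..10}
do
  if [ $((i % 2)) -eq 0 ]; then
    continue   # skip the rest of this iteration for even numbers
  fi
  if [ "$i" -eq 7 ]; then
    break      # terminate the loop entirely at 7
  fi
  echo "$i"
done
```

This prints 1, 3, and 5: the even numbers are skipped, and the loop ends before 7 is printed.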
There are several techniques you can use to debug a shell script:
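One of the most useful techniques is execution tracing. The sketch below (the variable is just an example) enables tracing with set -x, which prints each command, prefixed with +, before it runs:

```shell
#!/bin/bash
set -x            # turn tracing on: commands are echoed to stderr before execution
name="world"
echo "Hello, $name"
set +x            # turn tracing off again
```

The same effect can be had without editing the script by invoking it as bash -x script.sh; combining tracing with set -e (exit on first error) and set -u (error on unset variables) is a common debugging setup.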
There are a few situations where shell programming/scripting may not be the best choice:
That being said, shell scripts are still a very useful tool for automating a wide variety of tasks and can be a good choice in many situations. It's important to consider the specific requirements of your task and choose the right tool for the job.
The default permissions of a file when it is created depend on the umask (user mask) of the user who is creating the file. The umask is a value that determines the default permissions for newly created files and directories. It is specified in octal notation, and the permissions it specifies are subtracted from the default permissions set by the system.
For example, if the umask is set to 022 (octal), the default permissions for a newly created file will be 644 (rw-r--r--). This means that the owner of the file will have read and write permissions, and others will have read-only permissions.
If the umask is set to 002 (octal), the default permissions for a newly created file will be 664 (rw-rw-r--). This means that the owner and members of the owner's group will have read and write permissions, and others will have read-only permissions.
We can view and change the umask of our user account using the umask command. For example, to set the umask to 022 (octal), we can use the command umask 022.
Keep in mind that the default permissions of a file may also be influenced by the permissions of the directory in which the file is being created, and by any default ACLs (access control lists) that are set on the system.
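The effect can be observed directly. The sketch below runs in a subshell so the current shell's umask is left untouched (the file name is illustrative):

```shell
#!/bin/bash
(
  umask 022                   # subtracts 022: new files get 644, new directories 755
  umask                       # print the current mask, e.g. 0022
  touch /tmp/umask_demo_$$    # $$ (the shell's PID) keeps the name unique
  ls -l /tmp/umask_demo_$$    # permissions show as -rw-r--r--
  rm /tmp/umask_demo_$$
)
```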
There are four key elements of a Linux file system:
The kernel is the central part of a computer operating system that controls all the other parts of the system. It is responsible for managing the hardware and software resources of the system, including the CPU, memory, and input/output devices.
The kernel is the lowest level of the operating system and is responsible for interfacing with the hardware and providing a platform for other software to run on top of. It is a fundamental part of the operating system and is essential for the proper functioning of the computer.
In Linux, we can make a variable unchangeable by using the readonly builtin command in the Bash shell. This command can be used to mark a shell variable as read-only, which means that it cannot be modified or unset.
Below is an example of how to use the readonly command:
$ x=10
$ readonly x
$ x=20
bash: x: readonly variable
In the example above, the variable x is first set to the value 10. The readonly command is then used to mark x as a read-only variable. Finally, an attempt is made to change the value of x to 20, but this results in an error because x is a read-only variable.
It is important to note that the readonly command only works within the Bash shell and does not affect the variables in other processes or programs. It is also specific to the shell in which it is used, so a variable marked as read-only in one shell may not be read-only in another shell.
Shell scripting has several drawbacks that you should be aware of:
Overall, shell scripting is a useful tool for automating tasks and interacting with the command line, but it may not be the best choice for larger or more complex projects. In these cases, you may want to consider using a more powerful programming language such as Python or C++.
It's no surprise that this question pops up often in Shell Scripting interviews.
There are several ways to create a shortcut in Linux, depending on what we want to do and the desktop environment we are using. Here are a few options:
It is important to note that the steps to create a shortcut may vary depending on our specific Linux distribution and desktop environment.
In Linux, shell programs (also known as shell scripts) are typically stored in files with a .sh file extension. These files can be stored in any directory on the system, but there are a few common locations that are used:
It is important to note that these directories are just conventions, and we can store the shell scripts in any directory on the system. However, using one of these directories can make it easier for users to access and run the scripts.
A common Shell Scripting interview question for DevOps, don't miss this one.
In Linux, a hard link is a type of link that points directly to the inode of a file, while a soft link (also known as a symbolic link or symlink) is a type of link that points to the file name of a file. Here are some key differences between hard links and soft links:
Hard Links
Soft Links
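The difference is easy to demonstrate with the ln command (the file names and the temporary directory are illustrative). A hard link is made with ln, a soft link with ln -s:

```shell
#!/bin/bash
cd "$(mktemp -d)"                 # work in a scratch directory
echo "original data" > file.txt
ln file.txt hard.txt              # hard link: shares the inode of file.txt
ln -s file.txt soft.txt           # soft link: stores the name "file.txt"
ls -li                            # -i shows inodes: file.txt and hard.txt share one
rm file.txt
cat hard.txt                      # still prints the data: it survives via the hard link
cat soft.txt 2>/dev/null || echo "soft link is now dangling"
```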
In a shell script, we can define a function by using the following syntax:
function function_name {
  commands
}
For example, the following script defines a function called greet that prints a greeting:
#!/bin/bash
function greet {
  echo "Hello, world!"
}

greet
We can also define a function without the function keyword, by following the function name with a pair of parentheses, like this:
#!/bin/bash
greet() {
  echo "Hello, world!"
}

greet
To call a function, simply type its name (no parentheses are used at the call site). For example:
#!/bin/bash
function greet {
  echo "Hello, world!"
}

greet
We can also pass arguments to a function by listing them after the function name, separated by spaces, when calling it. For example:
#!/bin/bash
function greet {
  echo "Hello, $1!"
}
greet John
Inside the function, the arguments are referred to as $1, $2, $3, and so on. The first argument is $1, the second is $2, and so on.
We can use functions to modularize the script and make it easier to read and maintain. Functions can be used to perform a specific task, and then called multiple times throughout the script. This can make it easier to reuse code and avoid repeating the same code multiple times.
This is one of the basic questions asked in intermediate-level Linux shell scripting interviews.
One of the most frequently posed UNIX and Shell Scripting interview questions, be ready for it.
In a shell script, we can use variables to store data and manipulate it. To define a variable, we can use the following syntax:
variable_name=value
For example, the following script defines a variable called message and assigns it the value "Hello, world!":
#!/bin/bash
message="Hello, world!"
echo $message
To access the value of a variable, we can use the dollar sign ($) followed by the variable name. For example:
#!/bin/bash
message="Hello, world!"
echo $message
We can also use variables to store the output of a command. For example:
#!/bin/bash
current_directory=$(pwd)
echo "The current directory is $current_directory"
We can also use variables to store the results of arithmetic expressions. For example:
#!/bin/bash
x=5
y=3
result=$((x + y))
echo "The result is $result"
In a shell script, it is important to remember that variables are case-sensitive and that they must be referenced using the dollar sign ($).
It is also a good practice to use descriptive names for variables to make the code easier to read and understand, and it can be helpful in bash scripting interview questions.
Here are some common shell scripting errors and ways to troubleshoot them:
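A classic example is referencing a misspelled (and therefore unset) variable, which bash normally expands to an empty string without complaint. The sketch below shows the silent failure and how set -u turns it into a loud one:

```shell
#!/bin/bash
greeting="Hello"
# Misspelled variable: expands to an empty string, so the bug goes unnoticed
echo "[$greting]"
# In a subshell with set -u, the same typo aborts with an "unbound variable" error
( set -u; echo "[$greting]" ) 2>/dev/null || echo "set -u caught the misspelled variable"
```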
Shell scripts often use constructs such as loops and conditional statements to perform tasks. Below are some examples:
for loop: The for loop is used to iterate over a list of items. For example:
for i in 1 2 3 4 5
do
  echo $i
done
This script will print the numbers 1 through 5 to the console.
while loop: The while loop is used to execute a block of code repeatedly while a certain condition is met. For example:
counter=0
while [ $counter -lt 5 ]
do
  echo $counter
  counter=$((counter+1))
done
This script will print the numbers 0 through 4 to the console.
if statement: The if statement is used to execute a block of code if a certain condition is met.
For example:
if [ $1 -gt 5 ]
then
  echo "The first argument is greater than 5"
else
  echo "The first argument is not greater than 5"
fi
This script will print "The first argument is greater than 5" if the first argument passed to the script is greater than 5, and "The first argument is not greater than 5" otherwise.
case statement: The case statement is used to execute a block of code based on a value matching a pattern. For example:
case $1 in
  start)
    echo "Starting service..."
    # Start the service here
    ;;
  stop)
    echo "Stopping service..."
    # Stop the service here
    ;;
  *)
    echo "Invalid argument"
    ;;
esac
This script will start the service if the first argument passed to the script is "start", stop the service if the first argument is "stop", and print "Invalid argument" if the first argument does not match either of those patterns.
This is one of the most frequently asked UNIX Shell Scripting interview questions.
Regular expressions, sometimes known as "regex," are an effective technique for finding patterns in text. In a shell script, we can use regular expressions with the grep command to search for patterns in a file or stream of input.
Below is an example of using regular expressions with grep in a shell script:
grep -E '^[0-9]+$' input.txt
This command will search the input.txt file for lines that consist only of one or more digits (0-9). The -E option tells grep to use extended regular expressions, and the ^ and $ characters match the start and end of the line, respectively.
We can also use regular expressions with the sed command to perform search and replace operations on a file or stream of input. Below is an example:
sed -E 's/[0-9]+/X/g' input.txt
This command will search the input.txt file for any sequences of one or more digits (0-9) and replace them with the letter "X". The -E option tells sed to use extended regular expressions, and the s/[0-9]+/X/g expression tells sed to perform a global (g) search and replace operation, replacing all occurrences of one or more digits with "X".
In a shell script, we can use the if statement to execute a block of commands based on the value of a particular expression. The syntax for an if statement is:
if condition
then
  command1
  command2
  ...
fi
We can also use the else and elif (else-if) clauses to specify additional conditions and blocks of commands to execute.
Below is an illustration of an if statement that checks to see if a file is present:
#!/bin/bash
if [ -f "/path/to/file" ]
then
  echo "File exists"
else
  echo "File does not exist"
fi
In this example, the [ -f "/path/to/file" ] is a test that checks whether the file at the specified path exists and is a regular file. If the test evaluates to true, the echo command will be executed; if it evaluates to false, the else block will be executed.
We can also use the case statement to execute a block of commands based on the value of a particular expression. The syntax for a case statement is:
case expression in
  pattern1)
    command1
    command2
    ...
    ;;
  pattern2)
    command1
    command2
    ...
    ;;
  ...
esac
Below is an example of a case statement that tests the value of a variable:
#!/bin/bash
case $VAR in
  abc)
    echo "VAR is set to 'abc'"
    ;;
  def)
    echo "VAR is set to 'def'"
    ;;
  *)
    echo "VAR is set to something else"
    ;;
esac
In this example, the case statement will execute the appropriate block of commands based on the value of the $VAR variable. The * pattern is a catch-all that will match any value not matched by the other patterns.
Errors and exceptions in a shell script can be handled using the following methods:
exit status: Every command in a shell script returns an exit status. An exit status of zero indicates success, while a non-zero exit status indicates failure. We can use the $? variable to check the exit status of a command and take appropriate action based on the status. For example:
#!/bin/bash
# run a command
some_command
# check its exit status
if [ $? -eq 0 ]; then
  echo "Command succeeded"
else
  echo "Command failed"
fi
try-catch blocks: Bash has no built-in try, throw, or catch statements, but we can emulate the pattern using functions and exit statuses. The "try" function contains the code that might fail, and the "catch" function contains the code to handle the failure. Below is an example:
#!/bin/bash
try() {
  # code that might fail
  some_command
}
catch() {
  # code to handle the failure
  echo "Exception: some_command failed" >&2
}
# run the try block; if it fails, run the catch block
if try; then
  echo "try block succeeded"
else
  catch
fi
trap: The trap command allows us to specify a command to run when a particular signal is received by the script. We can use the trap command to handle exceptions and errors in our script. For example:
#!/bin/bash
# define the trap: on any command failure (ERR), print a message and exit
trap 'echo "Error: command failed" >&2; exit 1' ERR
# run a command that might fail
some_command
# remove the trap
trap - ERR
echo "Command succeeded"
set -e: We can use the set -e option to make the script exit immediately if any command returns a non-zero exit status. This can be useful for handling errors and exceptions in your script. For example:
#!/bin/bash
# set the -e option
set -e
# run a command that might fail
some_command
echo "Command succeeded"
Expect to come across this popular question in UNIX Scripting interviews.
There are many ways to work with files and directories in a shell script. Here are a few common ones:
Listing Files and Directories: We can use the ls command to list the files and directories in a directory. For example, to list all the files and directories in the current directory, we can use:
#!/bin/bash
ls
We can use various options with the ls command to customize the output. For example, to list the files and directories in a long format, sorted by modification time, we can use:
#!/bin/bash
ls -lt
Changing Directories: We can use the cd command to change the current working directory.
For example, to change the current directory to /etc, we can use:
#!/bin/bash
cd /etc
Reading from Files: We can use the cat command to print the contents of a file to the terminal. For example, to print the contents of a file myfile.txt, we can use:
#!/bin/bash
cat myfile.txt
We can also use the more and less commands to view the contents of a file, which allow us to page through the file.
Writing to Files: We can use the echo command to write text to a file. For example, to write the text "Hello, World!" to a file myfile.txt, you can use:
#!/bin/bash
echo "Hello, World!" > myfile.txt
We can also use the tee command to write the output of a command to a file, while still printing it to the terminal. For example:
#!/bin/bash
some_command | tee myfile.txt
Copying Files: We can use the cp command to copy a file. For example, to copy a file src.txt to dest.txt, we can use:
#!/bin/bash
cp src.txt dest.txt
We can also use the cp command to copy directories. For example, to copy a directory src to dest, we can use:
#!/bin/bash
cp -r src dest
Moving and Renaming Files: We can use the mv command to move or rename a file. For example, to rename a file src.txt to dest.txt, we can use:
#!/bin/bash
mv src.txt dest.txt
We can also use the mv command to move a file to a different directory. For example, to move a file src.txt to the /tmp directory:
#!/bin/bash
mv src.txt /tmp
Removing Files: We can use the rm command to remove a file. For example, to remove a file myfile.txt:
#!/bin/bash
rm myfile.txt
We can also use the rm command to remove directories.
Here are a few common pitfalls to watch out for when writing shell scripts:
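Perhaps the most common pitfall is forgetting to quote a variable that may contain spaces, which makes the shell split it into several words. The sketch below uses set -- to show the effect on the argument count (the file name is illustrative):

```shell
#!/bin/bash
file="my document.txt"   # a single file name that contains a space
set -- $file             # unquoted: split into two words, "my" and "document.txt"
echo "unquoted gives $# words"
set -- "$file"           # quoted: stays one word
echo "quoted gives $# word"
```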
To pass arguments to a shell script, we can simply list them after the script name, separated by space. For example:
./myscript.sh arg1 arg2 arg3
Inside the script, we can access the arguments using the $1, $2, $3, etc. variables. For example:
#!/bin/bash
echo "Argument 1: $1"
echo "Argument 2: $2"
echo "Argument 3: $3"
We can also use the $* and $@ variables to access all of the arguments as a single string or as separate strings, respectively.
For example:
#!/bin/bash
# Print all arguments as a single string
echo "All arguments: $*"
# Print all arguments as separate strings
echo "All arguments: $@"
We can also use the $# variable to get the total number of arguments passed to the script.
For example:
#!/bin/bash
echo "Total number of arguments: $#"
In Unix and Unix-like operating systems, a file or directory name that starts with a dot (.) is considered to be a hidden file or directory. Hidden files and directories are not normally displayed when listing the contents of a directory using commands such as ls.
To list hidden files and directories, we can use the ls -a command, which will show all files and directories, including hidden ones.
The dot (.) at the beginning of a file name has no special meaning to the operating system, but it is often used by convention to indicate a configuration file or other file that is meant to be hidden from normal view. For example, the file .bashrc in a user's home directory is a configuration file for the Bash shell, and the file .gitignore is used to specify files that should be ignored by Git.
Keep in mind that the dot (.) is also used to represent the current directory and the parent directory in a file path. For example, ./myfile refers to the file myfile in the current directory, and ../myfile refers to the file myfile in the parent directory.
The printf command is an alternative to echo that is available in Unix and Unix-like operating systems. Like echo, printf is used to output text to the console or to a file.
One advantage of printf over echo is that it is more flexible and can be used to print text in a specific format. It can also handle escape sequences and special characters more reliably than echo.
Here is an example of how printf can be used:
printf "Hello, %s\n" "world"
This will print the string "Hello, world" followed by a newline character. The %s is a placeholder for a string, and the \n is an escape sequence that represents a newline.
We can use other placeholders to print different types of data, such as integers (%d), floating point numbers (%f), and more.
For example:
printf "Number: %d\n" 42
printf "Float: %.2f\n" 3.1415
This will print "Number: 42" and "Float: 3.14", respectively.
We should keep in mind that printf is not available in all shells and may not be available on all systems. However, it is a useful tool to have in your toolkit when working with shell scripts.
To find out how long a Unix or Linux system has been running, we can use the uptime command. This command will show the current time, the length of time the system has been running, the number of users logged in, and the load average over the last 1, 5, and 15 minutes.
For example:
$ uptime
 21:52:06 up 3 days, 6:22, 2 users, load average: 0.00, 0.00, 0.00
In this example, the system has been running for 3 days and 6 hours (3 days, 6:22).
We can also use the who -b command to show the time that the system was last booted. For example:
$ who -b
         system boot  2021-07-14 21:50
This will show the date and time that the system was last booted.
Keep in mind that these commands may not work on all systems, and the output may vary depending on the specific operating system and version.
A must-know for anyone heading into a Shell Scripting interview, this is one of the most frequently asked UNIX Shell Scripting interview questions.
To find out which shells are available on the Unix or Linux system, we can use the cat command to display the contents of the /etc/shells file. This file contains a list of all the shells that are available on the system.
For example:
$ cat /etc/shells
# /etc/shells: valid login shells
/bin/sh
/bin/bash
/usr/bin/sh
/usr/bin/bash
/usr/local/bin/bash
We can also use chsh (change shell) command to see which shells are available and to change the default shell for the user account. For example:
$ chsh
Changing the login shell for username
Enter the new value, or press ENTER for the default
        Login Shell [/bin/bash]:
This will display a list of available shells, and we can choose one from the list or enter the path to a different shell.
We should know that the list of available shells may vary depending on the specific operating system and version, and may include other shells such as zsh, csh, and more.
There are several commands available in Unix and Unix-like operating systems to check the disk usage of a file system. Some of the most common ones are:
df: The df command displays information about the available and used disk space on a file system. By default, it shows the size, used space, and available space for all file systems.
For example:
$ df
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda1      469059992 415097728  53962256  89% /
We can use the -h option to display the sizes in "human-readable" format, with units such as MB and GB.
For example:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       450G  391G   51G  89% /
du: The du command displays the disk usage of individual directories and files. By default, it shows the sizes of directories and their contents in blocks.
For example:
$ du
16      ./dir1
8       ./dir1/file1
4       ./dir1/file2
4       ./dir2
4       ./dir2/file1
8       .
We can use the -h option to display the sizes in "human-readable" format, with units such as MB and GB.
For example:
$ du -h
16K     ./dir1
8.0K    ./dir1/file1
4.0K    ./dir1/file2
4.0K    ./dir2
4.0K    ./dir2/file1
8.0K    .
We can also use the -c option to show the total size of all directories and files.
For example:
$ du -ch
16K     ./dir1
8.0K    ./dir1/file1
4.0K    ./dir1/file2
4.0K    ./dir2
4.0K    ./dir2/file1
8.0K    .
32K     total
ncdu is a text-based disk usage viewer that allows you to navigate through directories and see the sizes of individual files and directories in real-time. It is a useful tool for finding and deleting large files and directories to free up space on your file system.
To use ncdu, we need to run the ncdu command and navigate through the directories using the arrow keys and the enter key. Pressing d will delete a file or directory and pressing q will exit the program.
We should know that these are just a few examples of the commands available to check disk usage. There are many other tools and options available, depending on specific needs and preferences.
It's no surprise that this one pops up often in Shell Scripting interview questions for DevOps.
Awk is a command-line utility that allows you to perform operations on a stream of text data, such as extracting and transforming the contents of a file or generating reports from data. It is particularly useful for working with structured data, such as tab-separated values (TSV) or comma-separated values (CSV).
Below is a brief overview of how awk works:
Awk reads input one line at a time and splits each line into fields based on a predefined separator (by default, this is a whitespace character).
The fields can then be accessed and manipulated using special variables (e.g., $1 refers to the first field, $2 to the second field, etc.).
awk processes each line according to a set of rules, which specify the actions to be taken based on the contents of the fields.
The modified lines are then printed to the output.
Here is an example of using awk to print the second and fifth fields of a file containing tab-separated values:
awk -F'\t' '{print $2, $5}' input.txt > output.txt
This command reads the contents of input.txt, sets the field separator to a tab character (-F'\t'), and then prints the second and fifth fields of each line to output.txt.
Awk is a very powerful and versatile utility, and there are many additional commands and options available for performing more complex operations.
Practice more Shell Scripting interview questions and answers like this to make a lasting impression on your recruiters.
UNIX provides several security provisions for protecting files and data:
It is important to note that these security provisions are not mutually exclusive, and we can use a combination of these measures to protect our files and data.
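File permissions, the most commonly used of these provisions, are managed with chmod (and ownership with chown). Below is a small sketch using a temporary file:

```shell
#!/bin/bash
f=$(mktemp)          # a scratch file to experiment on
chmod 600 "$f"       # owner read/write only: -rw-------
ls -l "$f"
chmod u+x,g+r "$f"   # add execute for the owner and read for the group: -rwxr-----
ls -l "$f"
rm "$f"
```

For ownership, chown user:group file changes the owner and group of a file (this usually requires root privileges).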
sed is a command-line utility that allows you to perform text transformations on an input stream (a file or input from a pipeline). It is commonly used for extracting part of a file, transforming the contents of a file, or deleting lines from a file.
Here is a brief overview of how sed works:
sed reads input one line at a time and performs the specified transformation on each line.
The transformations are specified using a set of commands, which are provided as arguments to sed.
The modified lines are then printed to the output.
Here is an example of using sed to replace all occurrences of the word "apple" with the word "banana" in a file:
sed 's/apple/banana/g' input.txt > output.txt
This command reads the contents of input.txt, performs a substitution on each line to replace "apple" with "banana", and writes the modified lines to output.txt. The g at the end of the substitution command specifies that the substitution should be performed globally on each line (i.e., all occurrences of "apple" should be replaced, not just the first one).
sed is a very powerful and versatile utility, and there are many additional commands and options available for performing more complex transformations.
Bash is a weakly typed language because it does not require variables to be explicitly declared with a specific data type. Instead, variables in bash are automatically interpreted based on the context in which they are used. This means that we can use a variable in one part of the script as a string, and then use it in another part of the script as an integer, for example.
This can be both a strength and a weakness of bash. On one hand, it makes it easy to use variables and write scripts quickly, as we don't have to worry about declaring the data types of the variables. On the other hand, it can also make it easy to introduce bugs into our scripts, as we might not realize that a variable is being interpreted differently than we intended.
Overall, the weakly typed nature of bash is something that we should be aware of as we write scripts, and we should ensure that the variables are being used in the way that we intended.
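A short sketch of this dual interpretation:

```shell
#!/bin/bash
x="5"                               # no type declaration; x is simply text
echo "string context: $x$x"         # concatenation: prints 55
echo "integer context: $((x + x))"  # arithmetic expansion treats it as a number: prints 10
x="hello"                           # the same variable can later hold arbitrary text
echo "reassigned: $x"
```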
In Unix-like systems, the pipe operator (|) is used to redirect the output of one command to the input of another command. It allows us to chain multiple commands together and process the output of one command as the input of the next command.
For example, consider the following command:
ls -l | grep "foo"
This command will list the files in the current directory using the ls command, and then filter the output to display only the lines that contain the string "foo" using the grep command. The output of the ls command is piped to the grep command using the | operator, allowing the grep command to process the output of the ls command as its input.
We can use multiple pipe operators to chain together multiple commands. For example:
cat file.txt | grep "foo" | sort | uniq -c
This command will display the contents of the file "file.txt" using the cat command, filter the output to display only the lines that contain the string "foo" using the grep command, sort the output alphabetically using the sort command, and count the number of occurrences of each unique line using the uniq command with the -c option.
Overall, the pipe operator allows us to run several commands in one line by chaining them together and processing the output of one command as the input of the next. This can be a powerful and efficient way to manipulate data and perform complex tasks on the command line.
One of the most frequently posed Shell Scripting interview questions, be ready for it.
In a shell script, you can use the kill command to send a signal to a process. The kill command takes two arguments: the process ID of the process that we want to signal and the signal that we want to send.
For example, the following script sends the SIGINT signal (which is equivalent to pressing Ctrl+C) to the process with the ID 12345:
#!/bin/bash
kill -SIGINT 12345
We can also use the killall command to send a signal to all processes with a particular name. For example, the following script sends the SIGKILL signal (which forcibly terminates the process) to all processes with the name "foo":
#!/bin/bash
killall -SIGKILL foo
We can use the ps command to list the processes running on the system, and the grep command to filter the output to only show processes with a particular name. For example, the following script lists all processes with the name "foo":
#!/bin/bash
ps aux | grep foo
We can also use the wait command to wait for a process to complete before continuing with the rest of the script. The wait command takes the process ID of the process that we want to wait for as an argument. For example:
#!/bin/bash
foo &
pid=$!
wait $pid
echo "The foo process has completed"
The `&` symbol at the end of the `foo` command runs the process in the background. The `pid` variable stores the process ID of the `foo` process. The `wait` command then waits for the `foo` process to complete before continuing with the rest of the script.
In a shell script, we can use processes and signals to manage and control the execution of our script. By using commands like `kill`, `killall`, and `wait`, we can manage the processes running on our system and ensure that the script executes correctly.
Here are some tips for optimizing the performance of a shell script:
A staple in Shell Scripting interview questions, be prepared to answer this one.
Shell scripts are often used to automate tasks on a Unix or Linux system. They can be used to configure systems and applications, perform maintenance tasks, and more.
Shell scripts can be used to automate tasks related to working with containers and container orchestration tools like Docker and Kubernetes. Below are a few examples of how shell scripts can be used in this context:
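As one minimal, hypothetical sketch (the registry URL `registry.example.com` and image name `myapp` are invented for illustration), a script might tag, build, and push a Docker image:

```shell
#!/bin/bash
# Sketch: automate building and pushing a Docker image.
set -e  # stop at the first failing command

IMAGE="registry.example.com/myapp"

# Derive the full image tag from a version string; keeping this in a
# function makes the script easy to reuse and test.
image_tag() {
    local version="$1"
    echo "${IMAGE}:${version}"
}

TAG=$(image_tag "1.2.3")
echo "Building $TAG"
# The actual Docker invocations (commented out here, since they require
# a running Docker daemon and registry credentials):
# docker build -t "$TAG" .
# docker push "$TAG"
```

A similar pattern applies to Kubernetes, e.g. wrapping `kubectl apply` calls for each manifest in a directory.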
To manipulate and manage processes with shell scripts, you can use the command-line utilities that are available on your system. Some common utilities for managing processes include:
Here is an example of how you might use these utilities in a shell script to list, kill, and then confirm the termination of a process:
#!/bin/bash
# List the process
ps aux | grep process-name
# Kill the process (grep -v grep excludes the grep command itself)
kill $(ps aux | grep process-name | grep -v grep | awk '{print $2}')
# Confirm that the process has been terminated
if [ $(ps aux | grep process-name | grep -v grep | wc -l) -eq 0 ]
then
    echo "Process terminated successfully"
else
    echo "Error: process not terminated"
fi
We can also use other utilities, such as systemctl or service, to start, stop, and manage system services. For example:
# Start a service
systemctl start service-name
# Stop a service
systemctl stop service-name
# Restart a service
systemctl restart service-name
Shell scripts can be used to integrate with other tools or systems in a variety of ways. Some common ways to use shell scripts for integration include:
Overall, shell scripts are a powerful tool for automating and integrating a wide variety of tasks and processes within a larger ecosystem.
There are many ways to use shell scripts to automate tasks such as deployment or testing. Here are a few examples:
Deployment: Shell scripts can be used to automate the process of deploying software to various environments. For example, we might use a shell script to perform the following tasks as part of a deployment process:
Testing: Shell scripts can be used to automate the process of running tests against software. For example, you might use a shell script to perform the following tasks as part of a testing process:
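As one illustrative sketch of such a test harness (the `run_test` helper and the `true`/`false` stand-in commands are invented for illustration; a real script would invoke the project's own test commands), a script might run each test and count passes and failures:

```shell
#!/bin/bash
# Sketch: a minimal shell test runner.
pass=0
fail=0

run_test() {
    # Run a command; count it as passed if it exits with status 0.
    if "$@" > /dev/null 2>&1; then
        pass=$((pass + 1))
    else
        fail=$((fail + 1))
    fi
}

run_test true     # stand-in for a test that succeeds
run_test false    # stand-in for a test that fails

echo "Passed: $pass, Failed: $fail"
```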
This question is a regular feature in Shell Scripting interview questions for DevOps, be ready to tackle it.
There are several ways to use shell scripts to interact with APIs and other external services. Below are a few examples:
curl: The curl command can be used to send HTTP requests to an API and receive the response. For example:
curl https://api.example.com/endpoint
jq: The jq command is a tool for parsing and manipulating JSON data. It can be used in combination with curl to extract specific values from the API response. For example:
curl https://api.example.com/endpoint | jq .key
wget: The wget command can be used to download files from the web, including files returned by an API. For example:
wget https://api.example.com/endpoint/file.zip
grep: The grep command can be used to search for specific patterns in text. It can be used in combination with other commands, such as curl, to extract specific information from the output. For example:
curl https://api.example.com/endpoint | grep pattern
There are many other tools and techniques that can be used to interact with APIs and other external services from a shell script.
Shell scripts can be used to manage and manipulate files and directories in a variety of ways. Some common tasks that can be performed using shell scripts include:
Overall, shell scripts provide a powerful set of tools for managing and manipulating files and directories on a system, allowing us to automate tasks such as backups, file cleanup, and more.
To check if a file exists on the filesystem using a bash shell script, we can use the test command with the -f option. Below is an example of how we might use this command to check if a file called "file.txt" exists:
if test -f "file.txt"; then
    echo "File exists"
else
    echo "File does not exist"
fi
Alternatively, we can use the [ -f "file.txt" ] syntax to achieve the same result.
Below is an example of how we might use this command in a script:
#!/bin/bash
if [ -f "file.txt" ]; then
    echo "File exists"
else
    echo "File does not exist"
fi
This script will check for the existence of a file called "file.txt" and will print "File exists" if the file exists or "File does not exist" if the file does not exist.
We can also use the -d option to check for the existence of a directory or the -e option to check for the existence of either a file or a directory. This is also one of the most frequently asked Shell Scripting questions.
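A short sketch demonstrating all three tests, using a scratch file and directory so the script is self-contained:

```shell
#!/bin/bash
# Create a temporary directory and file to demonstrate the tests.
tmpdir=$(mktemp -d)
touch "$tmpdir/file.txt"

[ -f "$tmpdir/file.txt" ] && echo "regular file exists"       # -f: regular file
[ -d "$tmpdir" ]          && echo "directory exists"          # -d: directory
[ -e "$tmpdir/file.txt" ] && echo "path exists"               # -e: file or directory

# Clean up the scratch data.
rm -r "$tmpdir"
```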
The difference between [[ $string == "efg*" ]] and [[ $string == efg* ]] is the presence of double quotes around the string being compared.
In the first example, [[ $string == "efg*" ]], the double quotes around "efg*" indicate that the string should be treated literally, and the * character should be interpreted as a literal asterisk. This means that the expression will only evaluate to true if the value of $string is exactly "efg*".
In the second example, [[ $string == efg* ]], the double quotes are not present, which means that the * character will be interpreted as a wildcard. This means that the expression will evaluate to true if the value of $string starts with "efg" followed by any number of characters.
For example:
string="efgabc"
if [[ $string == "efg*" ]]; then
    echo "Match with double quotes"
else
    echo "No match with double quotes"
fi
if [[ $string == efg* ]]; then
    echo "Match without double quotes"
else
    echo "No match without double quotes"
fi
The output of this script would be:
No match with double quotes
Match without double quotes
Overall, the use of double quotes can be important in bash to ensure that strings are treated literally, and to prevent special characters from being interpreted as wildcards or other special symbols.
The crontab command is used to schedule tasks to be executed automatically at a specified time. When we use the crontab command to schedule a task, the task is stored in one of two files:
To view the tasks in user crontab file, we can use the crontab -l command. To edit the user crontab file, we can use the crontab -e command. To remove the user crontab file, we can use the crontab -r command.
To schedule a task using the crontab command, we need to specify the time and date when the task should be executed, as well as the command that should be executed. The time and date are specified using a special syntax called the "crontab format," which consists of five fields: minute, hour, day of month, month, and day of week. Each field can contain a single value, a list of values, or a range of values, separated by commas.
For example, to schedule a task to be executed every hour, you might use a crontab entry like this:
0 * * * * /path/to/command
This entry specifies that the task should be run every hour, at the top of the hour (when the minute field is 0).
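A few more example entries in the same five-field format (the command path is a placeholder):

```
# Run at 00:00 every day
0 0 * * * /path/to/command
# Run every 15 minutes
*/15 * * * * /path/to/command
# Run at 08:00 every Monday (day of week 1)
0 8 * * 1 /path/to/command
```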
To count the number of words in a given file using a bash shell script, we can use the wc command with the -w option.
Here is an example of a command sequence that we can use to count the words in a file called "file.txt":
# Count the number of words in the file (reading via < so that
# wc prints only the count, not the filename)
word_count=$(wc -w < "file.txt")
# Print the result
echo "Number of words: $word_count"
This script will execute the wc command with the -w option, which counts the number of words in the file. The output of the wc command will be captured by the $(...) syntax and stored in the word_count variable. The script will then print the value of the word_count variable using the echo command.
The wc command can also be used to count the number of lines in a file (using the -l option) or the number of bytes in a file (using the -c option).
For example, to count the number of lines in a file called "file.txt", we could use the following script:
# Count the number of lines in the file
line_count=$(wc -l < "file.txt")
# Print the result
echo "Number of lines: $line_count"
And to count the number of bytes in a file called "file.txt", you could use the following script:
# Count the number of bytes in the file
byte_count=$(wc -c < "file.txt")
# Print the result
echo "Number of bytes: $byte_count"
In Unix-like operating systems, the "s" permission bit is used to set the setuid (or setgid) permission on a file. When the setuid permission is set on a file, it allows the file to be executed with the permissions of the owner of the file, rather than the permissions of the user who is executing the file. This can be useful for allowing users to execute a program with superuser privileges, even if they do not have the necessary permissions to run the program directly.
For example, considering a file called "sudo", which is owned by the root user and has the setuid permission set. If a user executes this file, it will be run with the permissions of the root user, even if the user does not have root permissions themselves. This can be useful for allowing users to execute commands that require superuser privileges, such as installing software or modifying system settings.
The setgid permission works in a similar way, but it allows the file to be executed with the permissions of the group owner of the file, rather than the permissions of the user who is executing the file.
The setuid and setgid permissions are represented by an "s" in place of the "x" in the owner and group sections of the file's permissions string. For example, if a file has the permissions "rwsr-sr-x", the "s" bits indicate that both the setuid and setgid permissions are set on the file.
It is important to use the setuid and setgid permissions carefully, as they can be a security risk if used improperly. In particular, it is important to make sure that setuid and setgid programs are carefully designed and implemented, as vulnerabilities in these programs can be exploited to gain unauthorized access to the system.
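As a small illustration (using a harmless scratch file rather than a real privileged program), the setuid bit can be set with `chmod u+s` and checked with the shell test operator `-u`:

```shell
#!/bin/bash
# Demonstrate setting and checking the setuid bit on a scratch file.
f=$(mktemp)

chmod u+s "$f"   # set the setuid bit (symbolic form; 4xxx in octal)
ls -l "$f"       # the owner's execute position shows "s" ("S" if not executable)

# The test command can check the bit directly:
if [ -u "$f" ]; then
    echo "setuid bit is set"
fi

rm "$f"
```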
Expect to come across this popular question in UNIX Scripting interviews.
To create a directory with the desired permissions, I can use the mkdir command and specify the -m option to set the permissions of the directory.
To allow anyone in the group to create and access files in the directory, but not delete files created by others, I can use the following permissions:
To create the directory with these permissions, I will use the following command:
mkdir -m 1775 /path/to/directory
This will create the directory with the permissions rwxrwxr-t. The group read/write/execute bits allow the owner and members of the group to create and access files in the directory, while the leading 1 sets the sticky bit, which prevents users from deleting or renaming files they do not own.
It is also important to note that setgid permission can be used to ensure that new files created in the directory are automatically owned by the group, rather than the user who created the file. To set the setgid permission on the directory, I will use the chmod command with the g+s option:
chmod g+s /path/to/directory
This will set the setgid permission on the directory, ensuring that new files created in the directory are automatically owned by the group.
To monitor a log file that is constantly updating, we can use the tail command with the -f option. The tail command is used to display the last few lines of a file, and the -f option allows it to follow the file and display new lines as they are added to the file.
For example, to monitor the log file "log.txt" and display new lines as they are added to the file, we can use the following command:
tail -f log.txt
This will display the last few lines of the log file, and then continue to display new lines as they are added to the file. The tail command will keep running until we stop it, so we can use it to effectively monitor the log file in real-time.
We can also use the -n option to specify the number of lines to display initially, or, together with -f, the -s option to specify the sleep interval between checks for new data.
For example, to display the last 100 lines of the log file and then follow it, checking for new lines every 5 seconds, we can use the following command:
tail -f -n 100 -s 5 log.txt
Overall, the tail command is a useful tool for monitoring log files and other files that are constantly updating. It allows us to view new lines as they are added to the file, making it easier to track changes and identify issues.
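One common refinement, sketched below, is to pipe tail into grep so that only matching lines are shown; the sample log contents here are invented for illustration. grep's --line-buffered option makes matches appear immediately rather than being held in grep's output buffer:

```shell
#!/bin/bash
# Follow a log and show only error lines (requires a live, growing log):
# tail -f log.txt | grep --line-buffered "ERROR"

# The same pipeline in non-following form, shown on a scratch file:
logfile=$(mktemp)
printf 'ok\nERROR: disk full\nok\n' > "$logfile"
tail -n 3 "$logfile" | grep "ERROR"   # prints only the ERROR line
rm "$logfile"
```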
To set up a connection to a remote server where we can execute commands, we can use one of the following methods.
Overall, the choice of which method to use to connect to a remote server will depend on your specific needs and the capabilities of the server. SSH and Telnet are commonly used for command-line access to servers, while remote desktop protocols are more suitable for interacting with a graphical desktop environment.
A must-know for anyone heading into a Shell Scripting interview, this is one of the most frequently asked UNIX Scripting interview questions.
To find the number of lines in a file that contains the word "LINUX" using a bash shell script, we can use the grep command with the -c option.
Below is an example of a script that you can use to find the number of lines in a file called "file.txt" that contain the word "LINUX":
# Find the count of lines containing "LINUX" in the file
line_count=$(grep -c "LINUX" "file.txt")
# Print the result
echo "Number of lines containing LINUX: $line_count"
This script will execute the grep command with the -c option, which searches for the specified pattern ("LINUX") in the file and counts the number of lines that match the pattern. The output of the grep command will be captured by the $(...) syntax and stored in the line_count variable. The script will then print the value of the line_count variable using the echo command.
We can also use the -i option to ignore case when searching for the pattern, or the -w option to match only whole words.
For example, to search for the pattern "linux" regardless of case, we can use the following script:
# Find the count of lines containing "linux" in the file, ignoring case
line_count=$(grep -ci "linux" "file.txt")
# Print the result
echo "Number of lines containing linux: $line_count"
These are shell scripting scenario-based interview questions and are generally asked of experienced candidates.
To print a list of every user's login names on a Unix-like system, we can use the cut and sort commands to extract the login names from the /etc/passwd file and sort them alphabetically.
Below is an example of a command sequence that we can use to print a list of login names:
# Extract the login names from the /etc/passwd file
login_names=$(cut -d: -f1 /etc/passwd)
# Sort the login names alphabetically
sorted_login_names=$(echo "$login_names" | sort)
# Print the login names
echo "$sorted_login_names"
This script will use the cut command to extract the first field (the login name) from each line of the /etc/passwd file, using the : character as the delimiter. The output of the cut command will be stored in the login_names variable.
The script will then use the sort command to sort the login names alphabetically and store the sorted list in the sorted_login_names variable. Finally, the script will use the echo command to print the login names.
The /etc/passwd file is a system file that contains information about the users on the system, including their login names, home directories, and other details. Each line of the file represents a single user, and the fields are separated by ":" characters.
There are several ways to send mail using a shell script. One option is to use the mail command, which is a command-line utility for sending and receiving mail.
To use the mail command to send a message, you can use the following syntax:
echo "message" | mail -s "subject" recipient@example.com
This will send a message with the specified subject to the specified recipient. The message can be entered directly on the command line, or it can be piped to the mail command from another command.
For example, to send a message with the subject "Hello" to the recipient "user@example.com", you can use the following command:
echo "Hello, how are you?" | mail -s "Hello" user@example.com
We can also use the -a option to attach a file to the message or the -c option to specify a carbon copy recipient.
For example, to send a message with the subject "Hello" to the recipient "user@example.com", with a carbon copy to "cc@example.com" and an attachment "attachment.txt", we can use the following command:
echo "Hello, here is the attachment you requested." | mail -s "Hello" -a "attachment.txt" -c cc@example.com user@example.com
A common question in Shell Scripting interviews, don't miss this one.
Use Comments in Scripts At All Times
This is a recommended practice that applies to all programming languages, not only shell scripting. Adding comments to a script makes it easier for you, or someone else reading it, to understand what each section of the script does. Comments begin with the # symbol.
Clean Code
Declare all global variables first, followed by all functions. Use local variables inside functions, and put the main body of the script after the functions. Use explicit exit status codes in your functions, in if statements, and at the end of the script.
Using Some Trap for Unexpected Termination
While your script is running, users can stop it with Ctrl-C. If your script has modified a directory or file, you must restore it to its original condition. This is the situation the trap command is designed for.
Pressing Ctrl-C sends the SIGINT signal to the running process. A SIGTERM signal is sent when a user terminates a process with the kill command (its default signal).
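A minimal sketch of this pattern, assuming the script's only state is a scratch file it creates: the EXIT trap performs the cleanup, and the INT/TERM traps simply exit with conventional codes so the EXIT trap fires exactly once.

```shell
#!/bin/bash
# Sketch: restore state on unexpected termination using trap.
workfile=$(mktemp)

cleanup() {
    rm -f "$workfile"
    echo "cleaned up"
}

trap cleanup EXIT     # runs on any exit, normal or via the traps below
trap 'exit 130' INT   # Ctrl-C sends SIGINT
trap 'exit 143' TERM  # kill's default signal is SIGTERM

echo "working with $workfile"
```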
Exit Codes
Any Unix command that returns control to its parent process gives an exit code, which is an integer between 0 and 255.
Other integers are allowed, but they are processed modulo 256, so exit -10 is equivalent to exit 246, and exit 257 is equivalent to exit 1. Exit codes can be used in shell scripts to modify the execution flow based on the success or failure of executed commands.
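A short sketch of reading an exit code through the special `$?` variable (using the stand-in commands `false` and `true`, which always fail and succeed respectively):

```shell
#!/bin/bash
# Sketch: inspect a command's exit code via $?.
false                  # always fails with exit code 1
status=$?
echo "false exited with $status"

if true; then          # always succeeds with exit code 0
    echo "true succeeded"
fi
```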
Automate GIT with Shell Script
Automate Git with a shell script for pushing and pulling code. Although we can do this directly with Jenkins, a shell script is useful when a full server build is not required.
Print Everything That you do on Terminal
Typically, scripts alter the state of a system. Because we cannot predict when a user will send a SIGINT or when a script fault will cause an unexpected termination, it is helpful to print everything you are doing to the terminal so that the user can follow along without having to open the script.
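Two common ways to do this, sketched below, are explicit echo progress messages and the shell's built-in `set -x` command tracing (the trace is written to stderr):

```shell
#!/bin/bash
# Sketch: keep the user informed of what the script is doing.

# 1. Explicit progress messages:
echo "Creating working directory..."
workdir=$(mktemp -d)

# 2. Let the shell echo each command as it runs:
set -x
ls "$workdir" > /dev/null
set +x

echo "Done."
rm -r "$workdir"
```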
Shell programming is heavily focused on files, text, lines, and words, whereas Java and C++ applications typically think in terms of an object paradigm. Instead of working with objects, shell programming connects applications using textual transformations.
Before we can connect things, we must first understand what each of them accomplishes independently. Additionally, the items can be arbitrarily complicated, posing recursive problems. Both sed and awk, which are fully-fledged programming languages in their own right, make extensive use of regular expressions.
Shell scripting is not at all difficult after you have mastered (1) the notions of data streams (pipelines, standard in/out), (2) the concept of commands and command line arguments and options, and (3) (most challenging) the precise effect of so-called shell metacharacters.
As a user of a Linux-based operating system, you must be familiar with all the fundamental commands and applications that meet your computing needs. You can organize these instructions and programs using Shell Scripting (Programming) techniques to automate their execution. You can also add complexity by integrating all these commands using programming logic to address a challenging computing issue. The best certifications for programmers will help you acquire more knowledge about programming and upscale your skills. If you want to pursue a career as a Linux administrator or engineer, this course is perfect for you.
There are several opportunities available from numerous reputable businesses worldwide. According to studies, around 17 percent of the market is occupied by Linux Shell Scripting. Since 2018, Linux has begun to significantly grow its market. The average pay for a shell programming talent is $81,951, according to PayScale. Learning shell scripting interview questions would be an excellent place to start if you want to advance your DevOps or system administrator career.
According to glassdoor.com, IBM Linux Engineer salaries average $136,407 per year (5 salaries reported), and the average salary for a Linux Systems Engineer in the US is $109,307 per year.
If you are determined to ace your next interview as a Linux Engineer or DevOps Engineer, these shell scripting interview questions and answers will fast-track your career and help you boost your confidence in interviews. To relieve you of the worry and burden of preparing for your upcoming interviews, we have compiled the above list of Shell scripting interview questions with answers prepared by industry experts. Being well-versed in these commonly asked UNIX Shell Scripting interview questions will be your very first step toward a promising career as a DevOps/Linux Engineer.
You can opt for various job profiles after being certified as a Linux Engineer. A few are listed below:
If you wish to build a career as a Linux administrator or Linux engineer, you can learn more about KnowledgeHut Programming certifications from the best training available. Crack your Shell Scripting Interview questions with ease and confidence!