
SQL Server Interview Questions and Answers

SQL Server is a relational database management system (RDBMS) from Microsoft that supports a wide variety of transaction processing, business intelligence, and analytics workloads. Whether you are a beginner, an intermediate user, or an expert in SQL Server, this write-up will help you boost your confidence and knowledge of SQL. The questions provided here will level up your knowledge of SQL Server, with step-by-step explanations for every question to help you understand the concepts in detail. With these SQL Server interview questions by your side, you can be confident about your preparation for the upcoming interview.


Beginner

UNION blends the contents of two structurally compatible tables into a single combined result set. The distinction between UNION and UNION ALL is that UNION discards duplicate records, whereas UNION ALL includes them. Note that the performance of UNION ALL is commonly better than UNION, since UNION requires the server to do the extra work of removing any duplicates. So in situations where there is certainty that there won't be any duplicates, or where having duplicates isn't an issue, UNION ALL is the recommended choice for performance reasons. Let's have a look at the examples below explaining the usage of both. In the first we have used UNION, and in the second we have explained UNION ALL.

Union example code
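
Since the original screenshots are not reproduced here, the following minimal sketch (with hypothetical tables EmployeesIndia and EmployeesUS) illustrates the difference:

-- UNION removes duplicate rows from the combined result
SELECT EmpName FROM EmployeesIndia
UNION
SELECT EmpName FROM EmployeesUS;

-- UNION ALL keeps every row, including duplicates
SELECT EmpName FROM EmployeesIndia
UNION ALL
SELECT EmpName FROM EmployeesUS;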

  • INNER JOIN- This is the simplest and most widely used JOIN, and the default if we do not specify a JOIN type between tables. It returns all records from both tables where the matching condition is satisfied (see the sketch after this list).
  • LEFT JOIN- Also called LEFT OUTER JOIN. We use it when we want all records from one table and only the matching records from the other. All records from the LEFT table are returned along with the matching records from the RIGHT table; where there is no match in the right table, the columns from the right table are returned as NULL.
  • RIGHT JOIN- Also called RIGHT OUTER JOIN. This is just the reverse of the LEFT JOIN: the result set contains all records from the right table but only the matching records from the left one. Even when the ON clause finds no match, rows from the right table are still returned, with the corresponding columns from the LEFT table set to NULL.
  • FULL JOIN- Also called FULL OUTER JOIN. It has the characteristics of both LEFT and RIGHT outer joins: the result set contains a row whenever there is a match in either the LEFT or the RIGHT table. It gives the same result as applying UNION to the LEFT and RIGHT OUTER JOIN result sets.
  • CROSS JOIN- This is the Cartesian product of two tables, where each row from the first table is joined with every row of the second table. Listing two tables in the FROM clause separated by a comma without any WHERE condition gives the same result as a CROSS JOIN.
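
A minimal sketch of these JOIN types, assuming two hypothetical tables Orders and Customers related by CustomerId:

-- Matching rows only
SELECT o.OrderId, c.CustomerName
FROM Orders o
INNER JOIN Customers c ON o.CustomerId = c.CustomerId;

-- All customers, with NULL order columns where no order exists
SELECT o.OrderId, c.CustomerName
FROM Customers c
LEFT JOIN Orders o ON o.CustomerId = c.CustomerId;

-- Every combination of order and customer
SELECT o.OrderId, c.CustomerName
FROM Orders o
CROSS JOIN Customers c;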
  • Master: It contains system catalogs that keep data about disk space, file allocations, usage, system-wide configuration settings, login accounts, the existence of other databases, and the existence of other SQL Servers (for distributed operations). If this database is missing or corrupted, the SQL Server instance cannot start. Although we can create user objects in the master database, it is not advisable to do so; this database should remain as static as possible, because if the master database ever has to be rebuilt, all user objects in it will be lost.
  • Model: It is basically a template database. Each time you create a new database, SQL Server makes a copy of model to form the basis of the new database. Any changes made to this database, such as DB size, collation, recovery model, and any other configuration, are applied to any new database created afterward.
  • Tempdb: The temporary database, tempdb, is a workspace. The tempdb database is unique among all other databases because it is re-created, not recovered, each time SQL Server is started.
  • Msdb: This database is used by the SQL Server Agent service, which performs scheduled activities such as backup and replication tasks.

system databases
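
A quick way to see these system databases on an instance is a simple query against the sys.databases catalog view:

-- The four system databases have database_id 1 to 4
SELECT name, database_id, recovery_model_desc
FROM sys.databases
WHERE database_id <= 4;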

Normalization and denormalization are strategies used in database design. Normalization is a procedure for limiting insertion, deletion and update anomalies by disposing of redundant information. Denormalization, on the other hand, is the reverse procedure, where redundancy is deliberately added to the data to improve the performance of a particular application while preserving data integrity. Normalization prevents wasted disk space by limiting or removing redundancy.

Normalization v/s denormalization in SQL

Indexing is required to speed up search results in the database. If we compare an index with the real world, the page numbers of a book and the keyword index usually found at the back of the book work much like database indexes. We can quickly go to the relevant pages if we know the page number, and if we only know a keyword, visiting the keyword section makes the job easier because keywords are linked to page numbers. There are two types of indexes in a SQL database: clustered and non-clustered. The page number of the book is similar to the clustered index, while the keyword section of the book represents non-clustered indexes. Both exist in the database as B-Tree structures. Let's go into the details of each index.

  • Clustered Index 

We can have only one clustered index per table. When we create a primary key, by default it creates a clustered index on that key; we can also specify the index type explicitly in case we would like a non-clustered index instead. A table without any clustered index is called a heap. As we said earlier, indexes are B-Tree structures; in a clustered index the leaf nodes are the data pages of the table, and the data is physically sorted based on the clustered index key. This is why it is the faster of the two index types for reads.

  • Non-clustered Index 

Earlier it was possible to have only 249 non-clustered indexes on a table, but from SQL Server 2008 onwards we can have 999 non-clustered indexes. One gets created by default when we create a unique key constraint, and we can also choose the index type while creating the unique constraint. It is not mandatory to have any non-clustered index. The leaf nodes of a non-clustered index hold either the clustering key or a RID used to reach the actual row: if the table does not have a clustered index, the leaf nodes point to the physical location of the row, which is called the RID, and if a clustered index is present, the leaf nodes hold the clustering key. Because the underlying data does not have to be kept physically sorted, DML operations such as inserting, deleting and updating records can be quicker than on a clustered index.
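
A minimal sketch of creating both index types on a hypothetical Employee table:

-- Primary key creates the clustered index by default
CREATE TABLE Employee
(
    EmpId INT NOT NULL PRIMARY KEY CLUSTERED,
    EmpName VARCHAR(100),
    DeptId INT
);

-- A separate non-clustered index to support lookups by department
CREATE NONCLUSTERED INDEX IX_Employee_DeptId
    ON Employee (DeptId);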

Magic tables are an integral part of triggers; they facilitate triggers and make their job easier. There are two magic tables: one is called inserted and the other is called deleted. These are built by SQL Server itself to hold data while a trigger is being processed. They hold the inserted, deleted and updated records while DML operations run against a table. If we have a trigger for an INSERT operation, the newly inserted values are stored in the INSERTED magic table. If a trigger fires for an UPDATE or DELETE operation, the old values of the updated or deleted records are stored in the DELETED magic table.

  • INSERTED Magic table

Let me explain the INSERTED magic table with the example below. In the screenshot we have a trigger for the INSERT operation, and we can see the two tables INSERTED/DELETED which we can use inside the trigger to perform any manipulation.

INSERTED Magic table code

  • DELETED Magic table

The example below illustrates the use case of the DELETED magic table, which comes into the picture when we have an UPDATE/DELETE trigger. This table holds the old record that was either updated or deleted.

DELETED Magic table code
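
Since the original screenshots are not included, here is a minimal sketch, assuming a hypothetical Employee table and an EmployeeAudit table, showing both magic tables inside triggers:

-- Trigger reading the INSERTED magic table after an INSERT
CREATE TRIGGER trg_Employee_Insert ON Employee
AFTER INSERT
AS
BEGIN
    INSERT INTO EmployeeAudit (EmpId, Action)
    SELECT EmpId, 'INSERTED' FROM inserted;
END;
GO

-- Trigger reading the DELETED magic table after a DELETE or UPDATE
CREATE TRIGGER trg_Employee_Delete ON Employee
AFTER DELETE, UPDATE
AS
BEGIN
    INSERT INTO EmployeeAudit (EmpId, Action)
    SELECT EmpId, 'OLD VALUE' FROM deleted;
END;
GO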

SQL Server supports all three relationship types, and they are well supported by SQL Server design. Let me explain each relationship one by one:

One to One – This type of relationship is usually supported by a single table, and in some cases we can have two tables as well. As the name suggests, for each record in the primary entity there is exactly one record in the secondary entity. A person can have only one passport; he cannot have more than one. In the example below we have two tables, Person and Passport, with a one-to-one relationship achieved by a foreign key and a unique constraint enforced on that foreign key. In this case Person ID, which is the primary key in one table, works as the foreign key in the other.

One to One
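
A minimal sketch of such a schema (column names assumed for illustration):

CREATE TABLE Person
(
    PersonId INT NOT NULL PRIMARY KEY,
    PersonName VARCHAR(100)
);

CREATE TABLE Passport
(
    PassportId INT NOT NULL PRIMARY KEY,
    -- UNIQUE on the foreign key enforces one passport per person
    PersonId INT NOT NULL UNIQUE REFERENCES Person (PersonId)
);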

One to Many – This type of relationship is always supported by two tables. For each entry in the primary table there can be one or more entries in the secondary table, but each entry in the secondary table maps back to exactly one entry in the primary table. Let me explain with the example below, where we have two tables, Book and Author. A book can have more than one writer, so the same book can appear against more than one entry in the Author table, but each Author row refers to a single book. This type of relationship is supported by a primary key-foreign key relationship. Here Book Id works as a foreign key in the Author table to support one to many.

One to Many

Many to Many – This type of relationship is realized with more than two tables, where one table works as a join table between the first two. In the example below we have Students, Enrollments and Classes as three tables, where the Enrollment table works as a bridge between Students and Classes. One student can enroll in multiple classes and one class can be mapped to multiple students.

Many to Many
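
A minimal sketch of the bridge-table design (column names assumed for illustration):

CREATE TABLE Students (StudentId INT NOT NULL PRIMARY KEY, StudentName VARCHAR(100));
CREATE TABLE Classes  (ClassId   INT NOT NULL PRIMARY KEY, ClassName   VARCHAR(100));

-- Enrollments bridges Students and Classes; each row is one student in one class
CREATE TABLE Enrollments
(
    StudentId INT NOT NULL REFERENCES Students (StudentId),
    ClassId   INT NOT NULL REFERENCES Classes (ClassId),
    PRIMARY KEY (StudentId, ClassId)
);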

One of the most frequently posed SQL interview questions for freshers, be ready for it.  

There are several scenarios in which we use temp tables. For example, we may have no DDL or DML access to a table; we can use our existing read access to pull the information into a SQL Server temp table and make modifications from there. Or, if we don't have permission to create a table in the current database, we can create a SQL Server temp table that we can manipulate. Finally, we may be in a situation where we need the data to be available only in the current session.

Temp table names are prefixed with a hash sign followed by the table name, for instance #Table_name. SQL temp tables are created in the tempdb database. A local SQL Server temp table is only available to the current session; it cannot be accessed or used by procedures or queries outside of that session. One frequently used scenario for SQL Server temp tables is when we use a loop in a query: we need to process information and we need a place to store intermediate data for the loop to read, and temp tables give a quick and efficient way to do so. Another reason to use SQL Server temp tables is when we have to execute some expensive processing. Suppose we build a join, and each time we need to pull records from that result set the join has to be processed again and again. Why not process the result set once and put the records into a SQL temp table? Then the remainder of the SQL query can refer to the temp table name. Not only does this save expensive processing, it may even make our code look a little cleaner.

A SQL temp table is dropped or destroyed once the session that created it ends. Most of the time we will see an explicit DROP TABLE #tbl command, but it is not mandatory. Temp tables are always created in the tempdb database, and SQL Server resolves name conflicts by adding a suffix when different sessions create temp tables with the same name.

SQL temp tables vs table variables

Global SQL temp tables are helpful when we need to reference the table in all sessions. Anybody can add, modify, or delete records from the table, and note that anybody can also DROP it. Like local SQL Server temp tables, they are dropped once the creating session ends and there are no longer any references to the table. We can always use the DROP command to tidy them up manually.

Table variables are much like other variables. It is a common belief that table variables exist only in memory, but that is simply not correct: they live in the tempdb database just like local SQL Server temp tables. Also like local SQL temp tables, table variables are accessible only within the session in which they are declared, and a table variable is only available within the current batch.

Code
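
A minimal sketch contrasting the three options (local temp table, global temp table, table variable):

-- Local temp table: visible only to the current session
CREATE TABLE #LocalTemp (Id INT, Name VARCHAR(50));

-- Global temp table: visible to all sessions until the last reference ends
CREATE TABLE ##GlobalTemp (Id INT, Name VARCHAR(50));

-- Table variable: visible only within the current batch
DECLARE @TableVar TABLE (Id INT, Name VARCHAR(50));

INSERT INTO #LocalTemp VALUES (1, 'A');
INSERT INTO @TableVar VALUES (1, 'A');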

If performance is the criterion, table variables are helpful with modest amounts of data, while a SQL Server temp table is generally more helpful when filtering through a lot of information. So most of the time we will in all probability see a SQL Server temp table used rather than a table variable.

A self-join is one in which a table is joined to itself to get the desired output. We can understand self-joins with the scenario below:

Let's assume we have the below table structure for Employee:

  1. EmpId - The key identifier for the Employee
  2. MgrID - The key identifier for the Manager, which maps back to EmpId
  3. EmpName - The Employee name

self join code

If we need to extract employee and manager information from the above table, then the option we have is to use a self-join. We can self-join the Employee table by joining MgrID with EmpId. Please find below a query which explains the self-join clearly:

self join code
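
A minimal sketch of that self-join, using the column names listed above:

-- Each employee row is joined back to the row of its manager
SELECT e.EmpName AS Employee, m.EmpName AS Manager
FROM Employee e
INNER JOIN Employee m ON e.MgrID = m.EmpId;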

We also have an alternative way to get the same desired result set, using a LEFT OUTER JOIN so that employees without a manager are still returned:

self join code
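
A sketch of the LEFT OUTER JOIN variant under the same assumptions:

-- Employees with no manager appear with a NULL Manager column
SELECT e.EmpName AS Employee, m.EmpName AS Manager
FROM Employee e
LEFT OUTER JOIN Employee m ON e.MgrID = m.EmpId;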

A CTE (common table expression) is a virtual, temporarily named result set that holds the output of a query and can be referenced from a SELECT, INSERT, UPDATE, DELETE or CREATE VIEW statement. It exists only for the duration of the query. It is similar to a derived table, and in some scenarios we can use a CTE in place of a view:

With common table expression

WITH expression_name [ ( column_name [ ,...n ] ) ]
AS
( CTE_query_definition )
  • expression_name: A valid identifier for the common table expression. It must be different from the other expression names defined within the same WITH clause, but it can be the same as the name of a base table or view; any reference to it in the query uses the common table expression and not the base object.
  • column_name: Specifies a unique column name in the common table expression. The number of column names specified must match the number of columns in CTE_query_definition.

If the query definition supplies distinct names for all columns, then the column names are optional.

  • CTE_query_definition: A SELECT statement whose result set populates the common table expression. It must meet the same requirements as for creating a view, except that a CTE cannot define another CTE.

Let me list down some benefits of using CTE :

  • When we have a recursive query we can use a CTE. It holds the query output in an area defined during the CTE definition, which is beneficial when we need to reuse that output within the query. It holds the data only while the SQL query is running, and it is useful for holding filtered data needed for subsequent processing.
  • It improves the readability of queries without affecting performance.
  • It can be referenced multiple times in the SQL query.
  • It can be used instead of a view where the metadata definition does not need to be stored.

There are two types of CTE :

  • Recursive CTE: This type references itself within the CTE. It is useful while working with hierarchical data, since it executes repeatedly until the whole hierarchy is returned. A recursive query must contain two SQL query statements joined by a UNION ALL, UNION, INTERSECT, or EXCEPT operator. In the example below we have an Employee table holding hierarchical data (an Employee and Manager relationship).

Recursive CTE code
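
A minimal sketch of such a recursive CTE, assuming the Employee table described earlier (EmpId, MgrID, EmpName):

WITH EmpHierarchy (EmpId, EmpName, MgrID, Level)
AS
(
    -- Anchor member: top-level managers have no manager
    SELECT EmpId, EmpName, MgrID, 0
    FROM Employee
    WHERE MgrID IS NULL
    UNION ALL
    -- Recursive member: walk down one level at a time
    SELECT e.EmpId, e.EmpName, e.MgrID, h.Level + 1
    FROM Employee e
    INNER JOIN EmpHierarchy h ON e.MgrID = h.EmpId
)
SELECT EmpId, EmpName, Level
FROM EmpHierarchy;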

  • Non-Recursive CTE: This type does not reference itself. It is much simpler than the recursive type of CTE.

Non-Recursive CTE code

One of the most common issues that occurs when running parallel transactions is the dirty read problem. A dirty read happens when a transaction is permitted to read data that is being changed by another transaction running at the same time which has not yet committed.

If the transaction that modifies the data eventually commits, the dirty read causes no harm. But if the transaction that made the changes is rolled back after the other transaction has read the data, the latter transaction holds dirty data that doesn't actually exist.

Let us try to understand the scenario where a user tries to buy a product and a transaction performs the purchase task for the user. The first step within the transaction would be to update Items In Stock.

Before the transaction there are 12 items in stock; the transaction updates this to 11 items. The transaction then communicates with a third-party payment gateway. If at this point in time another transaction, let's say Transaction B, reads Items In Stock for laptops, it will read 11. However, if the user who triggered the first transaction, Transaction A, turns out to have insufficient funds in his account, Transaction A is rolled back and the ItemsInStock column reverts to 12. Transaction B, however, is working with 11 items because it read uncommitted data from the ItemsInStock column. This is dirty data, and the problem is termed the dirty read problem.

Dirty read in SQL server
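
A minimal sketch of how this can be reproduced, assuming a hypothetical Products table and two separate sessions:

-- Session 1: update the stock but do not commit yet
BEGIN TRANSACTION;
UPDATE Products SET ItemsInStock = 11 WHERE ProductName = 'Laptop';
-- ... waiting on the payment gateway ...

-- Session 2: a dirty read of the uncommitted value
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT ItemsInStock FROM Products WHERE ProductName = 'Laptop';  -- returns 11

-- Session 1: payment fails, so the change is undone
ROLLBACK TRANSACTION;  -- ItemsInStock is 12 again; Session 2 read dirty data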

Both are sets of SQL statements encapsulated inside a block to complete some task, but they differ in many ways.

  • Stored Procedure- The set of SQL statements in a stored procedure is a pre-compiled object: it gets compiled the very first time only, and afterward the plan is saved and reused.
  • Functions- A function is compiled and executed every time it is called. A function must always return a value, and it cannot perform DML operations on data.

Let me list down some major differences between the two (a short sketch follows the list):

  1. It is mandatory for a function to return a value, while it is optional in a stored procedure.
  2. Functions support only input parameters, while stored procedures can have both input and output parameters.
  3. A function can be called from a stored procedure, but not vice versa.
  4. A procedure allows SELECT as well as DML operations, while a function allows only SELECT.
  5. There are restrictions on using a stored procedure in the WHERE/HAVING/SELECT sections, while there is no such restriction for functions; we frequently use functions in the WHERE/HAVING/SELECT sections.
  6. Transactions are possible in stored procedures but not in functions.
  7. Exceptions can be handled in a stored procedure by using a TRY...CATCH block, but not in a function.
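
A minimal sketch of the contrast, using a hypothetical scalar function, stored procedure and Orders table:

-- Function: must return a value and can be used inside a SELECT/WHERE
CREATE FUNCTION dbo.fn_GetTax (@Amount DECIMAL(10,2))
RETURNS DECIMAL(10,2)
AS
BEGIN
    RETURN @Amount * 0.18;
END;
GO

-- Procedure: can run DML and return data through an OUTPUT parameter
CREATE PROCEDURE dbo.usp_ApplyTax @OrderId INT, @Total DECIMAL(10,2) OUTPUT
AS
BEGIN
    UPDATE Orders SET Tax = dbo.fn_GetTax(Amount) WHERE OrderId = @OrderId;
    SELECT @Total = Amount + Tax FROM Orders WHERE OrderId = @OrderId;
END;
GO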

It is very critical for any database to maintain data integrity and keep data consistent, and this is one of the important considerations when choosing any database. Data architects evaluate databases on their ACID properties. ACID is an acronym that stands for Atomicity, Consistency, Isolation and Durability.

ACID property in a database

Let me explain the four pillars of the ACID properties:

  • Atomicity - This property ensures that either all operations that are part of a database transaction commit or none of them do. Nothing is partially committed in the database; it is all or none.
  • Consistency - This property ensures that data is never left in a half-finished state; changes to data always happen in a consistent way.
  • Isolation - This ensures that all transactions run independently, without interfering with one another, until each transaction finishes in its own way.
  • Durability - This property is very critical, as it ensures that even in the case of a failure there is enough of a mechanism to recover committed data in the system.

Normalization ensures a reduction in redundancy and dependency. It is a popular approach of breaking large tables down into smaller ones with the help of relationships. Edgar F. Codd was the first to coin the term normalization and came up with three normal forms (1NF, 2NF, 3NF). Raymond F. Boyce later added another normal form, which has been named after both inventors (Boyce-Codd NF). The normal forms available so far are listed below:

Different normalization forms

  • 1st Normal Form (1NF)
  • 2nd Normal Form (2NF)
  • 3rd Normal Form (3NF)
  • Boyce-Codd Normal Form (BCNF)
  • 4th Normal Form (4NF)
  • 5th Normal Form (5NF)
  • 6th Normal Form (6NF)

But in most practical scenarios, database designs are only taken up to 3NF. Now, let us understand the first four normal forms:

  • 1NF: 1NF has the below characteristics:
    • Each cell of a row should hold a single value
    • Each row needs to be unique

1NF Example

  • 2NF: The second normal form is an extension of 1NF. It has two rules:
    • The table has to be in 1NF
    • It should have a single column as the Primary Key

The above 1NF table can be extended to 2NF by dividing it into the two tables below:

2 NF Example

  • 3NF: This normal form also has two rules:
    • The table has to be in 2NF
    • It should not have a structure in which a change in a non-key column value changes another non-key column

3 NF Example

For the above table structure, we can use the below design to support 3NF:

3NF Example

  • BCNF: This is required when we have more than one candidate key.

The primary key is a single column or combination of columns that uniquely identifies individual rows in the table. It must be unique and cannot have NULL values, which makes it different from a UNIQUE KEY. A table can have only a single primary key, which can consist of a single column or a combination of columns called a composite key. A foreign key in a table is actually a reference to the primary key of another table. Please find below a list of differences between the two keys, which look similar in nature:

  1. A primary key cannot have NULL values, while a foreign key can have NULL values.
  2. By default the primary key creates a clustered index on the table, which physically sorts data based on the key, while no index is created on a foreign key; we need to explicitly create indexes on foreign keys.
  3. We can have a single primary key in a table, but we can have multiple foreign keys in the table.

Defining primary key and foreign key
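
A minimal sketch of defining both keys, using hypothetical Department and Employee tables:

CREATE TABLE Department
(
    DeptId INT NOT NULL PRIMARY KEY,      -- clustered index created by default
    DeptName VARCHAR(100)
);

CREATE TABLE EmployeeInfo
(
    EmpId INT NOT NULL PRIMARY KEY,
    EmpName VARCHAR(100),
    DeptId INT NULL                        -- foreign key may be NULL
        REFERENCES Department (DeptId)
);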

UDF stands for user-defined function. UDFs are designed to accept parameters, perform processing on them and return the result of that processing, which can be either a scalar value or a complete result set. UDFs are widely used in scripts, in stored procedures and within other UDFs as well.

There are several benefits of using UDF :

  1. UDFs are designed with modular programming in mind. Once you have created a UDF, you can call it multiple times, and since it is independent of the calling source code it can be modified easily.
  2. UDFs reduce compilation cost by caching the execution plan, so they can be reused without recompiling.
  3. The WHERE clause can be an expensive operation, so if we filter on a complex condition we can create a UDF for it and use that UDF in the WHERE clause.

There are two types of UDFs, based on the output value:

  • Scalar functions - The output of this type of function is a scalar value. It may accept zero or more parameters and can return any SQL-defined data type other than text, ntext, image, cursor, and timestamp.

The scalar functions example

  • Table-valued functions - This type of function returns a table as its result set. It can accept zero or more parameters but always returns a table.

Table values functions example
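
A minimal sketch of an inline table-valued function, assuming the Employee table used in earlier examples:

CREATE FUNCTION dbo.fn_EmployeesByManager (@MgrID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT EmpId, EmpName
    FROM Employee
    WHERE MgrID = @MgrID
);
GO

-- Used like a table in the FROM clause
SELECT * FROM dbo.fn_EmployeesByManager(1);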

A staple in SQL Server interview questions, be prepared to answer this one.  

Let me list down some basic differences, then we will try to understand with examples as well:

  1. TRUNCATE is lightning fast when compared with the DELETE command. The reason is lighter usage of the transaction log by TRUNCATE: DELETE erases records one by one and logs an entry for every record, while TRUNCATE erases the records by deallocating the pages and makes only minimal entries in the log.
  2. Triggers are not fired in the case of TRUNCATE, while the DELETE command fires triggers.
  3. Identity column values get reset in the case of TRUNCATE, while DELETE does not reset the identity column value.
  4. TRUNCATE cannot be used on tables referenced by a foreign key constraint or on tables used in replication, whereas DELETE can.
  5. DELETE falls into the DML category, while TRUNCATE falls into the DDL category of commands.

Please consider the example below, where even after deleting records, rolling back the transaction reverted the changes:

SQL Server delete with rollback
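
A minimal sketch of that behaviour, assuming a hypothetical table named TestTable:

BEGIN TRANSACTION;

DELETE FROM TestTable WHERE Id = 1;
SELECT COUNT(*) FROM TestTable;   -- row is gone inside the transaction

ROLLBACK TRANSACTION;
SELECT COUNT(*) FROM TestTable;   -- row is back after the rollback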

The WHERE clause is the most widely used filter in SQL and is used to filter records in the result set. The HAVING clause does the same thing, but on the grouped result set. The WHERE clause is evaluated before the HAVING clause fires. Let me try to explain the differences with examples. Any SQL statement follows the syntax below:

command in SQL

The order of execution of a SQL statement runs from top to bottom. It implies that records are filtered first by the WHERE clause, and once the result set is grouped, the HAVING clause comes into the picture.

Where clause

Where clause
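
A minimal sketch of the difference, assuming a hypothetical Orders table:

-- WHERE filters individual rows before grouping
-- HAVING filters the groups after aggregation
SELECT CustomerId, SUM(Amount) AS TotalAmount
FROM Orders
WHERE OrderDate >= '2023-01-01'
GROUP BY CustomerId
HAVING SUM(Amount) > 1000;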

If you find that the product level is not consistent across all the nodes, you will need to fool the 2005 patch installer into only patching the nodes that need updating. To do so, you will have to perform the following steps:

  1. Fail Instance, Cluster, and MSDTC groups to an unpatched node
  2. Remove any successfully patched nodes from failover candidates of the SQL Server Service of the instance group (do this using Cluster Admin tool)
  3. Run the patch
  4. After the patch installs successfully, add the Nodes removed in Step 2 back to the SQL Server Service of the Instance group

Why do you need to do this? Well when the patch installer determines that not all nodes in the cluster are at the same patch level, a passive node operation will fail and will prevent you from moving forward with any further patching.

  1. Make a note of all node names (and/or IP addresses), SQL Server virtual names along with preferred nodes. If there are more than three nodes you may need to also take note of possible owners for each SQL resource group. For my example assume that I have a cluster with node1 and node2, SQL1 normally lives on node1 and SQL2 normally lives on node2.
  2. To start with a clean slate and ensure any previous updates are completed both nodes should be restarted if possible. Choose the physical node that you want to patch second and restart that node (in my example node2).
  3. Restart the node you want to patch first (node1). This will mean that both active SQL instances are now running on node2. Some restarts will be essential, but you could avoid the first two restarts if you need to keep downtime to a minimum and just fail SQL1 over to node2. The main point here is to always patch a passive node.
  4. Using Cluster Administrator, remove node1 from the possible owners lists of SQL1 and SQL2. This means that neither SQL instance can fail over to node1 while it is being patched.
  5. Run the service pack executable on node1.
  6. Restart node1.
  7. Add node1 back into the possible owner's lists of SQL1 and SQL2 and fail both instances over to node1.
  8. Repeat steps 4 – 6 on node2.
  9. Add node2 back into the possible owners lists of SQL1 and SQL2 and fail both instances over to node2. Check that the build level is correct and review the SQL Server error logs.
  10. Fail SQL1 over to node1. Check build levels and SQL Server error logs

Events:

Event Group: Performance

  • Event: ShowPlan_ALL (BinaryData column must be selected)
  • Event: ShowPlan_XML

Event Group: T-SQL

  • Event: SQL:BatchStarted
  • Event: SQL:BatchCompleted

Event Group: Stored Procedures

  • Event: RPC:Completed

Event Group: Locks

  • Event: Lock: Deadlock Graph
  • Event: Lock: Lock Deadlock Chain (Series of events that leads to a deadlock)

Event Group: Sessions

  • Event: Existing Connection

Event Group: Security Audit

  • Event: Audit Login
  • Event: Audit Logout

Columns:

Below are the most common columns that help us in understanding the trace file to troubleshoot the problems.

  • TextData
  • ApplicationName
  • NTUserName
  • LoginName
  • CPU
  • Reads
  • Writes
  • Duration
  • SPID
  • StartTime
  • EndTime
  • Database Name
  • Error
  • HostName
  • LinkedServerName
  • NTDomainName
  • ServerName
  • SQLHandle

All of these columns need not be available for every event; depending on the event selected, we have to choose the appropriate columns.

Filters:

  • ApplicationName
  • DatabaseName
  • DBUserName
  • Error
  • HostName
  • NTUserName
  • NTDomainName

different agents in replication

  • Snapshot Agent: Copies schema + data to the snapshot folder on the distributor. Used in all types of replication.
  • Log Reader Agent: Sends transactions from the Publisher to the Distributor. Used in transactional replication.
  • Distribution Agent: Applies snapshots/transactions to all subscribers; runs at the distributor in PUSH and at the subscriber in PULL. Used in transactional replication and transactional replication with updatable subscriptions.
  • Queue Reader Agent: Runs at the distributor and sends transactions back from the subscriber to the publisher. Used in transactional replication with updatable subscriptions.
  • Merge Agent: Applies the initial snapshot to subscribers and from then on synchronizes changes by resolving conflicts.

Replication does not continue after a log shipping failover. If a failover occurs, replication agents do not connect to the secondary, so transactions are not replicated to Subscribers. If a failback to the primary occurs, replication resumes. All transactions that log shipping copies from the secondary back to the primary are then replicated to Subscribers.

For transactional replication, the behaviour of log shipping depends on the "sync with backup" option. This option can be set on the publication database and distribution database; in log shipping for the Publisher, only the setting on the publication database is relevant.

Setting this option on the publication database ensures that transactions are not delivered to the distribution database until they are backed up at the publication database. The last publication database backup can then be restored at the secondary server without any possibility of the distribution database having transactions that the restored publication database does not have. This option guarantees that if the Publisher fails over to a secondary server, consistency is maintained between the Publisher, Distributor, and Subscribers. Latency and throughput are affected because transactions cannot be delivered to the distribution database until they have been backed up at the Publisher.

Before choosing the RAID (Redundant Array of Independent Disks) we should have a look into the usage of SQL Server files.

As a basic thumb rule “Data Files” need random access, “Log files” need sequential access and “TempDB” must be on the fastest drive and must be separated from data and log files.

We have to consider the below factors while choosing the RAID level:

factors to be considered while choosing the RAID level

  • Reliability
  • Storage Efficiency
  • Random Read
  • Random Write
  • Sequential Read
  • Sequential Write
  • Cost.

As an admin, we have to consider all of these parameters when choosing the proper RAID level. Obviously, the choice is usually between RAID-5 and RAID-10.

There are three methods.

  1. Replication monitor
  2. Replication commands
  3. Tracer Tokens
  • Replication Monitor: In Replication Monitor, from the list of all subscriptions, just double-click on the desired subscription. There we find three tabs.
    • Publisher to Distributor History
    • Distributor to Subscriber History
    • Undistributed commands
  • Replication Commands: 
    • Publisher.sp_repltrans: Checks the pending transactions at the publisher.
    • Distributor.MSrepl_commands and MSrepl_transactions: Give the transaction and command details. The actual T-SQL data is in binary format. From the entry time we can estimate the latency.
    • Distributor.sp_browsereplcmds: Shows the xact_seqno along with the corresponding T-SQL command.
    • sp_replmonitorsubscriptionpendingcmds: Shows the total number of pending commands to be applied at the subscriber along with the estimated time to apply them.
  • Tracer Tokens:

Available from Replication Monitor or via T-SQL statements, tracer tokens are special timestamped transactions written to the Publisher's transaction log and picked up by the Log Reader. They are then read by the Distribution Agent and written to the Subscriber. Timestamps for each step are recorded in tracking tables in the distribution database and can be displayed in Replication Monitor or via T-SQL statements.

When the Log Reader picks up a token, it records the time in the MStracer_tokens table in the distribution database. The Distribution Agent then picks up the token and records the Subscriber write time in the MStracer_history table, also in the distribution database.

Below is the T-SQL code to use Tracer tokens to troubleshoot the latency issues.

-- A SQL Agent job to insert a new tracer token in the publication database
USE [AdventureWorks]
GO
EXEC sys.sp_posttracertoken @publication = <PublicationName>
GO
-- Token tracking tables
USE Distribution
GO
-- publisher_commit
SELECT TOP 20 * FROM MStracer_tokens ORDER BY tracer_id DESC
-- subscriber_commit
SELECT TOP 20 * FROM MStracer_history ORDER BY parent_tracer_id DESC

It's no surprise that this one pops up often in SQL multiple choice questions.  

Intermediate

A page is the fundamental unit of storage that stores data. A page is 8KB in size, and the main page types in SQL Server are:

  • Data
  • Index
  • Text/Image (LOB, ROW_OVERFLOW, XML)
  • GAM (Global Allocation Map)
  • SGAM (Shared Global Allocation Map)
  • PFS (Page Free Space)
  • IAM (Index Allocation Map)
  • BCM (Bulk Change Map)
  • DCM (Differential Change Map)

An identity column in a table has an auto-generated, auto-incrementing value with each new row inserted. The user cannot normally insert a value into the identity column.

A sequence is a feature introduced with SQL Server 2012, similar to Oracle's sequence objects. A sequence object generates a series of unique numeric values as per the specification provided. The next value for a SEQUENCE object can be generated using the NEXT VALUE FOR clause.

  • IDENTITY is a column-level property tied to a particular table; it cannot be shared among multiple tables (see the sketch after this list).
  • SEQUENCE is an object defined by the user with specific details and can be shared by multiple tables; it is not tied to any particular table.
  • An IDENTITY property cannot easily be reset to its initial value, but a SEQUENCE object can be reset to its initial value at any time.
  • A maximum value cannot be defined for IDENTITY, whereas this can be done for a SEQUENCE object.
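
A minimal sketch of both approaches (object names assumed for illustration):

-- IDENTITY: tied to one table's column
CREATE TABLE OrdersIdent
(
    OrderId INT IDENTITY(1, 1) PRIMARY KEY,
    Amount DECIMAL(10, 2)
);

-- SEQUENCE: a standalone object that several tables can use
CREATE SEQUENCE dbo.OrderNumbers START WITH 1 INCREMENT BY 1 MAXVALUE 999999;

INSERT INTO OrdersIdent (Amount) VALUES (100.00);   -- OrderId generated automatically
SELECT NEXT VALUE FOR dbo.OrderNumbers;             -- next number from the sequence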

DMVs (Dynamic Management Views) are a set of system views introduced with SQL Server 2005. DMVs are a tool for the DBA to get internal information about the system and how it is working. Some commonly used DMVs are listed below, followed by a sample query.

  • sys.dm_exec_query_stats
  • sys.dm_exec_sql_text
  • sys.dm_os_buffer_descriptors
  • sys.dm_tran_locks – Locking and blocking
  • sys.dm_os_wait_stats – Wait statistics
  • sys.dm_exec_requests – Percentage complete for a running process
  • sys.dm_exec_sessions
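
A common pattern is joining two of these DMVs to see the most expensive cached queries; a minimal sketch:

-- Top 10 queries by total CPU time from the plan cache
SELECT TOP 10
    qs.total_worker_time AS TotalCpuTime,
    qs.execution_count,
    st.text AS QueryText
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;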

This is a frequently asked SQL Server interview question.  

SQL Server supports several different isolation levels.

  1. Read Uncommitted – Read Uncommitted is the lowest isolation level and allows dirty reads. Here, changes made by one transaction are visible to other transactions before they are committed.
  2. Read Committed – Read Committed isolation does not allow dirty reads; any data read by the transaction is 100% committed. While a transaction is updating a record, it holds exclusive locks on it.
  3. Repeatable Read – Repeatable Read is a more restrictive isolation level which holds locks on the rows it reads even for read-only transactions, so the data read cannot be modified from other sessions until the transaction completes.
  4. Serializable – This is the highest isolation level, similar to Repeatable Read but with the prevention of phantom reads. It ensures that transactions referencing the same records run serially.
  5. Snapshot – This is related to RCSI (Read Committed Snapshot Isolation). It provides consistency similar to Serializable isolation, but the difference is that Snapshot does not hold locks on the table during the transaction, so the table can be modified in other sessions. Snapshot isolation maintains versions of old data in tempdb, called row versioning.

Checkpoint

  • A checkpoint occurs on the recovery time interval, on BACKUP/DETACH commands and on the execution of certain DDL commands.
  • A checkpoint writes only dirty pages to the disk; it does not release any memory.
  • The checkpoint is responsible for the DB recovery point.
  • A checkpoint always marks an entry in the T-log before it executes, whether issued by the SQL engine or manually.
  • Checkpoint occurrence can be monitored using the performance monitor counter "SQL Server: Buffer Manager - Checkpoint pages/sec".
  • You need SYSADMIN, DB_OWNER or DB_BACKUPOPERATOR rights to execute a checkpoint manually.

Lazy Writer

  • The lazy writer releases buffer pool memory when memory pressure occurs, to ensure enough free memory.
  • The lazy writer looks for LRU (least recently used, i.e. "cold" pages that have not been read or written recently) pages in the buffer pool and releases the memory taken by them. It releases both dirty and clean pages: clean pages can be released without writing to disk, while dirty pages are released only after being written to the disk.
  • The lazy writer is only responsible for managing free buffer space; it does not affect or manage database recovery.
  • The lazy writer doesn't mark any entry in the T-log.
  • Lazy writer occurrence can be monitored using the performance monitor counter "SQL Server: Buffer Manager - Lazy writes/sec".
  • SQL Server runs it on its own; the user cannot run it manually.
  • SQL Server Database Engine
  • SQL Server Analysis Services (SSAS)
  • SQL Server Reporting Services (SSRS)
  • SQL Server Integration Services (SSIS)
  • SQL Server Management Studio
  • SQL Server Profiler
  • SQL Server Configuration Manager
  • SQL Server CEIP (Telemetry Services)
  • SSIS CEIP (Telemetry Services)
  • Database Tuning Advisor
  • R Service
  • Connectivity Components
  • Communication between clients and servers
  • Network libraries for DB-Library, ODBC, and OLE DB.
  • Documentation and Samples
  • Books Online

SQL Server supports the following types of database backups:

  • Full Backup – It contains a complete backup of the database.
  • Differential Backup – A differential backup is also known as an incremental backup. It contains the changes that happened after the last full backup.
  • Log Backup – A log backup contains a backup of the transaction log since the last log backup. It contains both committed and uncommitted transactions.
  • Copy-Only Backup – A special backup type that does not impact the sequence of regular backups and gives you a copy of the complete database.
  • Filegroup Backup – SQL Server gives you the capability of backing up one or more filegroups in place of a complete database backup.
  • Partial Backup – A partial backup helps you take a backup of specific data from different files or filegroups.

SQL Server database backups are supported by all 3 recovery models:

  • Simple
  • Full
  • Bulk-logged

Expect to come across this popular SQL Server DBA interview questions.  

With compressed backups, SQL Server uses a pre-allocation algorithm to determine how much space the compressed backup will require and allocates this space at the start of the backup process. When the backup completes, the backup process may expand or shrink the backup file on the basis of the actual requirement.

Pre-Allocation helps SQL Server save the overhead of constantly growing the backup file.

Sometimes, when we have a very large database and the difference between the uncompressed and compressed backup is very high, the preallocated and actual backup space differ significantly. Trace flag 3042 provides better space management, with a little overhead for SQL Server to grow the backup file on a regular basis. This is useful with limited disk space.

Enabling trace flag 3042 may result in a small spike in CPU and memory, but not significant enough to cause performance issues.

The pseudo-simple recovery model is a situational recovery model; it is not something you configure deliberately, but a state that arises from circumstances. It is a situation where the database is configured with the FULL recovery model but behaves like the SIMPLE recovery model.

The database will behave like the SIMPLE recovery model until a full database backup is taken after switching the recovery model to FULL.

For example, you have a database in the SIMPLE recovery model and you switch it to the FULL recovery model. You need to perform a full backup after switching the recovery model; otherwise, the database keeps behaving like SIMPLE and keeps truncating the transaction log on commit.
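
A minimal sketch of getting out of the pseudo-simple state (database and path names are placeholders):

-- Switch the recovery model to FULL
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- The log chain only starts once a full backup is taken
BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB_full.bak';

-- From here on, log backups are possible and the log is no longer auto-truncated
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';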

A must-know for anyone heading into SQL Server interview, this is frequently asked in SQL multiple choice questions.  

You need to restore last Sunday's full backup taken at 2 AM, then Wednesday's differential backup of 2 AM, followed by all transaction log backups taken after that. This assumes all backups are valid; in case of any bad backup file, the recovery path will change.

SQL Server Agent provides multiple functionalities:

  • SQL Server Agent is a Windows service that comes with all SQL Server editions other than Express.
  • SQL Server Agent is like the task scheduler of the OS. It is used to schedule jobs/processes and reports.
  • SQL Server Agent also provides inbuilt alerting features for conditions like blocking, deadlocks, etc.
  • SQL Server Agent supports multiple types of job steps, such as T-SQL, CMD, PowerShell, SSAS and SSIS, for scheduled execution.
  • SQL Server Agent provides the proxy option to secure and limit direct user access to critical subsystems.

SQL Server Agent has lots of features, and sometimes you need to grant rights on the agent to manage and view jobs. Giving everyone sysadmin permission is not appropriate.

SQL Server has 3 fixed database roles for permissions on SQL Server Agent and jobs.

  • SQLAgentUserRole – Users with the SQLAgentUserRole can manage only Agent jobs they own. Self-owned jobs can be managed with these rights, but the user cannot view other jobs available on the server.
  • SQLAgentReaderRole – This is the second level of permission on the agent. A user with the SQLAgentReaderRole can view all jobs available on the server but cannot change or modify any jobs. This role gives the ability to review multi-server jobs, their configurations and history, in addition to the SQLAgentUserRole rights.
  • SQLAgentOperatorRole – The SQLAgentOperatorRole gives the highest permission on the agent. A user with this role has the ability to review operators, proxies and alerts, execute, stop or start all local jobs, delete the job history for any local job, and enable or disable all local jobs and schedules, in addition to the SQLAgentReaderRole rights.

The guest user account is created by default with the SQL Server installation. The guest user is not mapped to any login, but it can be used by any login when explicit permissions are not granted to access an object. You can drop the guest user from all databases except master and tempdb.

When any user logs in to SQL Server, there are two parts: server and database. First, authentication is verified at the server level and the user is able to log in to SQL Server. Second, the login's mapping to the database is verified. If a login is not mapped to any database user but is still able to log in to the SQL Server, SQL automatically maps that login to guest and grants it database access.

One of Microsoft's security recommendations is to drop or disable the guest user in every database except the master and tempdb databases. By leaving the guest user enabled, you are at risk of unauthorized access to data.

It's no surprise that this one pops up often in SQL Server interview questions.  

An SQL injection is a web hacking technique, carried out by unauthorized people or processes, that can destroy your database.

An SQL injection is a web security vulnerability that allows attackers to interfere with the application code and queries used to interact with the database. Attackers use these methods to retrieve data belonging to other users or data they are not authorized to see. The major challenge is that SQL injection can cause system crashes, data theft, data corruption, etc.

Preventing SQL injection is not the task of one person or team; the complete support and architecture teams work together to prevent it from happening. Developers/DBAs are responsible for DB security and proper SQL code, application developers are responsible for application code and DB access methods, and the infra team is responsible for network, firewall and OS security.

Proper SQL instance, OS and firewall security, together with a well-written application, help to reduce the risk of SQL injection.

Index Reorg

  • Reorganization is an "online" operation.
  • Reorganization only affects the leaf level of an index.
  • Reorganization shuffles data in place to defragment the existing allocated pages of the index.
  • Reorganization is always a fully-logged operation.
  • Reorganization can be stopped or killed at any time without needing a rollback.

Index Rebuild

  • Rebuild is an "offline" operation by default.
  • Rebuild creates a completely new index B-Tree structure.
  • Rebuild uses new pages/allocations.
  • Rebuild can be a minimally-logged operation.
  • Rebuild can be stopped or killed, but it requires a rollback to complete transactionally (see the commands sketched below).
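
The corresponding maintenance commands, sketched against a hypothetical index and table:

-- Online defragmentation of the leaf level
ALTER INDEX IX_Employee_DeptId ON Employee REORGANIZE;

-- Full rebuild of the index B-Tree (offline by default)
ALTER INDEX IX_Employee_DeptId ON Employee REBUILD;

-- Rebuild every index on the table in one statement
ALTER INDEX ALL ON Employee REBUILD;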

Each table can have only one clustered index and multiple non-clustered indexes. All non-clustered indexes use the index key of the clustered index, i.e. they directly depend on the clustered index.

Because of this dependency, we are often asked whether rebuilding the clustered index will also rebuild the non-clustered indexes on a table.

The answer is NO. A clustered index rebuild will not rebuild or reorganize the non-clustered indexes. You need to plan a non-clustered index rebuild or reorganization separately.

A common in MS SQL interview questions, don't miss this one.  

  • Collation of TempDB should be the same as the SQL Server instance collation.
  • The owner of the TempDB database should be sa.
  • The guest user should not be dropped, nor its permissions revoked, from the TempDB database.
  • Keep the recovery model SIMPLE only.
  • Configure tempdb files to automatically grow as required.
  • Ensure the TempDB drives have RAID protection in order to prevent a single disk failure from shutting down SQL Server.
  • Keep the TempDB database on a separate set of disks.
  • TempDB database size should be according to server load.
  • TempDB data files should be configured according to the available CPU cores:
    • If the number of cores is < 8, the number of data files should equal the number of logical processors.
    • If the number of cores is between 8 and 32 inclusive, the number of data files should be 1/2 the number of logical processors.
    • If the number of cores is > 32, the number of data files should be 1/4 the number of logical processors.
  • Ensure each data file is the same size to allow optimal proportional-fill performance.
  • Place the tempdb database on a fast I/O subsystem.
  • Configure autogrowth for the tempdb data and log files to a reasonable size, to avoid the tempdb database files growing by too small a value.
  • Guidelines for setting the FILEGROWTH increment for tempdb files:
    • If the TempDB file size is > 0 and < 100 MB, the recommended FILEGROWTH increment is 10 MB.
    • If the TempDB file size is > 100 and < 200 MB, the recommended FILEGROWTH increment is 20 MB.
    • If the TempDB file size is >= 200 MB, the recommended FILEGROWTH increment is 10% or any fixed value, depending on the requirement or on the I/O system capabilities.
  • Do not shrink TempDB unless necessary.
  • Do not enable auto-create statistics & auto update statistics.
  • Ensure auto close is OFF.

One of the most frequently posed SQL Server interview questions for freshers, be ready for it.  

  • Snapshot replication – Snapshot replication works on a snapshot of the published objects. The Snapshot Agent takes care of it and applies it to the subscriber. Snapshot replication overwrites the data at the subscriber each time a snapshot is applied. It is best suited for fairly static data or where the sync interval is not an issue. The subscriber does not always need to be connected.
  • Transactional replication – Transactional replication replicates each transaction for the articles being published. For the initial setup, a snapshot or backup is required to copy the article data to the subscriber. After that, the Log Reader Agent reads transactions from the transaction log and writes them to the distribution database and then to the subscriber. It is the most widely used type of replication.
  • Merge replication – Merge replication is the most complex type of replication, allowing changes to happen at both the publisher and the subscriber. Changes made at the publisher and subscriber are merged to keep data consistent and uniform. For the initial setup, as with transactional replication, a snapshot or backup is required to copy the article data to the subscriber. After that, the Merge Agent tracks and synchronizes changes between the publisher and the subscriber, and it is capable of resolving conflicts that occur during data synchronization.

Push and Pull are types of replication subscription configuration. They decide how data will be replicated to the subscriber.

  • Push – In a push subscription, the publisher is responsible for sending data; data is pushed from the publisher to the subscriber. The Distribution Agent runs at the distributor to push changes. The distributor can be configured on the publisher or on a separate server. Changes can be pushed to subscribers on demand, continuously, or on a scheduled basis.

Push is best suited when the subscriber is connected all the time and needs access to the latest data at all times.

  • Pull – In a pull subscription, the subscriber is responsible for fetching data; the subscriber requests changes from the publisher. The Distribution Agent runs at the subscriber to pull changes. The subscriber can pull data as and when needed.

Pull is best suited when subscribers are not online all the time. The subscriber can tolerate data lag and can wait for a delay in the data refresh.

Log shipping has been a Microsoft SQL Server DR solution from the very beginning. In log shipping, backup and restore of a database from one server to another standby server happen automatically.

Log shipping works with 3 mandatory jobs and 1 optional job.

  • Backup Job - The backup job is responsible for taking transaction log backups on the primary server. It runs on the primary server.
  • Copy Job - The copy job runs on the secondary server and is responsible for copying T-log backups from the primary server to the secondary server.
  • Restore Job - The restore job runs on the secondary server to restore T-log backups in sequential order.

These 3 jobs are created separately for each database configured in log shipping.

  • Alert Job – This is an optional job to monitor the log shipping thresholds and generate notifications. This job is instance specific.

A staple in SQL server interview questions, be prepared to answer this one.  

  • Log shipping is a disaster recovery solution from Microsoft. Log shipping comes with multiple internal tables to check its details and monitor current status.
  • log_shipping_monitor_alert – This system table saves the alert configuration used to monitor and trigger notifications on violations.
  • log_shipping_monitor_error_detail – This system table shows errors that occurred during log shipping.
  • log_shipping_monitor_history_detail – This system table saves the history of log shipping and its status. It can be referred to later for any issues and for security reporting.
  • log_shipping_monitor_primary – This table saves one record per primary database with backup and monitoring thresholds.
  • log_shipping_monitor_secondary – This table saves one record per secondary database with the primary server, primary database, restore and monitoring thresholds.
  • log_shipping_primary_databases – This table saves a list of all databases serving as a primary database and enabled for log shipping.
  • log_shipping_secondary_databases – This table saves a list of all databases serving as a secondary database in log shipping.

Also, you can use the Transaction Log Shipping report at the server level to view the current status.

  • Principal Server: The main server holding the primary copy of the database and serving client applications and requests.
  • Mirror Server: The secondary server, which holds a mirrored copy of the principal database and acts as a hot or warm standby server on the basis of the synchronous or asynchronous configuration.
  • Witness Server: The witness server is optional; it controls automatic failover to the mirror if the principal becomes unavailable.

The FQDN (fully qualified domain name) is the computer name of each server together with the domain name. It can be found by running the following from the command prompt:

  • IPCONFIG /ALL
  • Concatenate the “Host Name” and “Primary DNS Suffix”
  • Host Name . . . . . . . . . . . . : XYZ
  • Primary DNS Suffix . . . . . . . : corp.mycompany.com
  • The FQDN of your computer name will be XYZ.corp.mycompany.com.

Advanced

The physical database architecture describes the way the database and its files are organized in SQL Server.

  • Pages and extents- A page is 8KB in size, and a set of 8 pages is called an extent. This is the fundamental unit in which data is stored.
  • Physical database files and filegroups- Database files visible at the file system level that store data and logs.
  • Table and index architecture- Database objects inside the database used to store and access data.
  • Database- A database is a set of data and log files which reside on the filesystem and are managed by the operating system.
  • SQL instance- A SQL instance is a logical entity that controls databases; one SQL instance can host multiple databases. Security and accessibility are also handled at the SQL instance level.
  1. One to One (1:1) – For each instance in the first entity there is one and only one instance in the second entity, and vice versa. Example: Employee Name and Employee ID; one employee can have only one Employee ID, and one Employee ID can be allocated to only one person.
  2. One to Many (1:N) – For each instance in the first entity there can be one or more instances in the second entity, but for each instance in the second entity there is one and only one instance in the first entity. Example: Manager and Employee; one manager can have many employees, but one employee can have only one manager.
  3. Many to Many (N:N) – For each instance in the first entity there can be one or more instances in the second entity, and vice versa. Example: Employee and Project; one employee can work on multiple projects and one project can have multiple employees working on it.

The write-ahead transaction log controls and defines how data modifications are recorded to disk. To maintain the ACID (Atomicity, Consistency, Isolation, and Durability) properties of a DBMS, SQL Server uses a write-ahead log (WAL), which guarantees that no data modifications are written to disk before the associated log record is written to disk.

The write-ahead log works as below:

  • SQL Server maintains a buffer cache to hold data pages as they are retrieved.
  • When a page is modified in the buffer cache, it is marked as dirty; the page is not immediately written back to disk.
  • A data page can be modified multiple times before it is written to disk, but each modification gets its own transaction log record.
  • The log records must be written to disk before the associated dirty page is removed from the buffer cache and written to disk.

This is a regular feature in SQL query interview questions for experienced professionals, be ready to tackle it.  

To understand row versioning, you should first understand RCSI (Read Committed Snapshot Isolation). RCSI does not hold locks on the table during the transaction, so the table can be modified in other sessions; it eliminates the use of shared locks on read operations.

RCSI maintains versions of old data in tempdb, which is called row versioning. Row versioning increases overall system performance by reducing the resources used to manage locks.

When any transaction updates a row in a table, a new row version is generated. With each subsequent update, if the database already has a previous version of the row, that previous version is stored in the version store, and the new row version contains a pointer to the old version of the row in the version store.

SQL Server runs a clean-up task to remove old versions which are no longer in use. As long as a transaction is open, all versions of rows modified by that transaction must be kept in the version store. Long-running open transactions and many old row versions can cause serious tempdb growth.
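A minimal sketch, assuming a database named SalesDB, of enabling RCSI and checking version store usage in tempdb:

-- Enable Read Committed Snapshot Isolation (needs exclusive access to the database)
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Approximate version store size in tempdb (KB)
SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM tempdb.sys.dm_db_file_space_usage;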

Sometimes we have situations where we cannot perform such activities using SSIS, due to multiple database types, different domains, older versions, etc.

We have several commands available to import data using T-SQL.

  • BCP – BCP (bulk copy program) is a command-line utility used to import or export a large number of rows between SQL Server tables and text/csv files.
  • Bulk Insert – BULK INSERT is used to load data files (text/csv) into a database table in a user-specified format.
  • OpenRowSet – OPENROWSET is used to access remote data from an OLE DB data source. It is an alternative to a linked server for one-time or ad-hoc connections.
  • OPENDATASOURCE – OPENDATASOURCE is used to access remote data on an ad-hoc basis with a four-part object name, without using a linked server name.
  • OPENQUERY – OPENQUERY is used to execute a specified query on a specified linked server. OPENQUERY can be referenced in the FROM clause or as the target of an INSERT, UPDATE, or DELETE statement.
  • Linked Servers – Linked servers are configured to access data outside SQL Server, from another SQL instance or other database types (such as Oracle or DB2). They allow queries and transactions against heterogeneous data sources.
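Hedged examples of two of these options; the file path, table names and linked server name are assumptions:

-- Load a csv file into an existing table
BULK INSERT dbo.Customers
FROM 'C:\Import\customers.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Run a query on a configured linked server and pull the result locally
SELECT *
FROM OPENQUERY(LINKEDSRV01, 'SELECT TOP 10 * FROM dbo.Orders');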

This is a frequently asked SQL server interview question for experienced professionals.  

We need to run the SQL Server patch installation a minimum of 4 times to patch both SQL instances on all 3 cluster nodes.

  • SQL 2005 - 1 installation. SQL 2005 supports remote installation, so the patch installs on all cluster nodes in one go, but only for clustered objects like the DB engine or Agent.
  • SQL 2008 R2 - 3 installations. SQL 2008 R2 does not support remote installation, so we need to patch all 3 nodes separately.

Additionally, we may need to run the SQL 2005 setup on the other 2 nodes to patch non-clustered objects like SSMS, but that is an additional step.

A full backup is a copy of the complete database with all data. At the time of a crash or data recovery, it is the starting point: you select which full backup to use and plan the recovery path from there.

To make things clear and doubt free, SQL Server includes all data and transactions in the full backup up to the END TIME of the backup. If your backup took 2 hours, that backup will also contain the data changes and transactions that happened during those 2 hours.

A full backup therefore includes all data and transactions as of backup completion time. It covers the changes made after the backup-start checkpoint so that they can be applied during the database recovery process.
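A basic full backup command as a sketch; the database name and path are assumptions:

BACKUP DATABASE SalesDB
TO DISK = 'D:\Backup\SalesDB_FULL.bak'
WITH CHECKSUM, STATS = 10;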

Expect to come across this popular question in SQL interview questions.  

  • DCM (Differential Changed Map): DCM pages are used by SQL Server for differential backups. They keep track of all extents which have changed since the last full database backup.

During a differential backup, the database engine reads the DCM pages and backs up only those extents which have changed since the last full backup. A value of 1 means the extent has been modified and 0 means it has not. After each full backup, all these extents are reset to 0.

  • BCM (Bulk Changed Map): BCM pages are used in the bulk-logged recovery model to track extents changed by bulk-logged or minimally logged operations.

During a log backup, the database engine reads the BCM pages and includes all extents which have been modified by bulk-logged operations. A value of 1 means the extent has been modified and 0 means it has not. After each log backup, all these extents are reset to 0.
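For reference, a differential backup (driven by the DCM pages) and a log backup (which, in bulk-logged recovery, also picks up extents flagged in the BCM pages); names and paths are assumptions:

BACKUP DATABASE SalesDB
TO DISK = 'D:\Backup\SalesDB_DIFF.bak'
WITH DIFFERENTIAL, CHECKSUM;

BACKUP LOG SalesDB
TO DISK = 'D:\Backup\SalesDB_LOG.trn'
WITH CHECKSUM;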

To restore a differential backup of any database, the database needs to be in restoring mode, which means it will not be accessible.

The MASTER database is the startup database for any SQL Server instance. The SQL instance will be offline if the MASTER database is not accessible.

If we combine both statements, we can see that a differential backup of the MASTER database is pointless because we could never restore it. That is why SQL Server will not allow you to take a differential backup of the MASTER database.

SQL Server provides the feature of managing jobs on target servers from a master/central server, called multi-server administration. Job and step information is stored on the master server. When jobs complete on the target servers, a notification is sent to the master server so that it always has up-to-date information. This is an enterprise-level solution for running a consistent set of jobs on numerous SQL Servers.

As the name implies, a SQL Server Agent proxy is an account that grants a user the privilege to execute a particular process or action when that user does not have the rights. SQL Server Agent proxies cover multiple subsystems:

  • ActiveX Script – Access to run ActiveX Scripts
  • Operating System (CmdExec) – Access to run Command line scripts
  • Replication Distributor – Replication Agent Rights
  • Replication Merge – Replication Agent Rights
  • Replication Queue Reader – Replication Agent Rights
  • Replication Snapshot – Replication Agent Rights
  • Replication Transaction-Log Reader – Replication Agent Rights
  • Analysis Services Command - SSAS execution rights
  • Analysis Services Query - SSAS execution rights
  • SSIS Package Execution – SSIS package execution rights
  • Unassigned Proxies – If the required subsystem is not listed, you can select this as an "other" option.

All these subsystems are available under Proxies in SSMS, and you can create proxies as per your requirement.
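A sketch of creating a proxy for the CmdExec subsystem; the credential, account and login names are assumptions:

-- Server-level credential mapped to a Windows account
CREATE CREDENTIAL CmdExecCred
WITH IDENTITY = 'CORP\svc_agent_cmd', SECRET = 'StrongPassword!1';

-- Proxy based on the credential, granted a subsystem and a login
EXEC msdb.dbo.sp_add_proxy
     @proxy_name = 'CmdExecProxy', @credential_name = 'CmdExecCred', @enabled = 1;
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
     @proxy_name = 'CmdExecProxy', @subsystem_name = 'CmdExec';
EXEC msdb.dbo.sp_grant_login_to_proxy
     @proxy_name = 'CmdExecProxy', @login_name = 'CORP\JobOwner';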

To secure your SQL Server instance, it’s advisable to hide your SQL instance. SQL Server instance can be marked as hidden from the SQL Server Configuration Manager.

SQL Server Configuration Manager > select the instance of SQL Server > right-click and select Properties > set Hide Instance to "Yes" and click OK or Apply.

You need to restart the instance of SQL Server.

SQL injection is a web hacking technique, carried out by unauthorized people or processes, that can destroy your database.

The major challenge is that SQL injection can cause system crashes, data theft, data corruption, etc.

Proper SQL instance, OS and firewall security, together with a well-written application, help reduce the risk of SQL injection.

Development\DBA

  • Validate or filter the SQL commands that are being passed from the front end
  • Validate data types and parameters
  • Use stored procedures with parameters in place of dynamic SQL (see the sketch after this list)
  • Remove old installers from application and database servers
  • Remove old backups, application files and user profiles
  • Restrict commands containing a semicolon, EXEC, CAST, SET, two dashes, apostrophes, special characters, etc.
  • Restrict the option of CMD execution or third-party execution
  • Grant the least possible rights to DB users
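A minimal sketch of the parameterized approach mentioned above (table and column names are assumptions); the user-supplied value never becomes part of the SQL text:

DECLARE @CustomerName NVARCHAR(100) = N'O''Brien';   -- value coming from the front end

EXEC sp_executesql
     N'SELECT CustomerID, CustomerName FROM dbo.Customers WHERE CustomerName = @Name',
     N'@Name NVARCHAR(100)',
     @Name = @CustomerName;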

Infra\Server

  • Latest Patches
  • Restricted Access
  • Updated Antivirus

Network Administration

  • Allow traffic from required addresses or domains
  • Firewall settings to be reviewed on a regular basis to prevent SQL Injection attacks

PowerShell is the Windows scripting language and is powerful enough to manage things from the core in a very efficient manner. PowerShell helps with deep automation and quick action on DBA activities, and is the new face of DBA automation.

We can perform multiple actions from a DBA perspective using PowerShell, such as:

  1. Check the status of SQL Server service or SQL Server Agent service
  2. Start/stop a SQL Server service
  3. Find the SQL Server version/edition including the service pack level
  4. Find the SQL Server operating system information such as the OS version, processor number, physical memory, etc.
  5. Perform Backups
  6. Script out a SQL Server Agent Job, based on a specific category
  7. Kill all sessions connected to a SQL Server database

UPDATE STATISTICS performs a recalculation of query-optimization statistics for a table or indexed view. Although query-optimization statistics are recomputed automatically when the Auto Update Statistics option is on, in some cases a query may benefit from updating those statistics more frequently. UPDATE STATISTICS uses tempdb for its processing.

Please note that updating statistics causes queries to be recompiled, which may lead to slower initial executions.

You can perform Update Statistics by 2 methods:

  • UPDATE STATISTICS - This needs ALTER rights on the table and offers more granular control, updating statistics for only one table or even a specific statistic. It cannot be used for the complete database in one go.
  • SP_UPDATESTATS - This needs sysadmin rights on the SQL instance. It updates all statistics of all tables in the database.
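Both methods as a sketch; the table name is an assumption:

-- Update all statistics of one table, optionally with a full scan
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Update statistics for every table in the current database (needs sysadmin)
EXEC sp_updatestats;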

A must-know for anyone heading into SQL Server interview, this is frequently asked in MS SQL interview questions.  

If a column in a table has more than 50% NULL values, then index selection for that column has to be done very carefully.

An index is based on a B-tree structure, and if the column has more than 50% NULL values then most of the data ends up on one side of the tree, giving zero benefit to query execution.

SQL Server has a special index called a filtered index which can be used here. You can create the index on the column only for NON NULL values; NULL data will not be included in the index.
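A sketch of such a filtered index; the table and column names are assumptions:

CREATE NONCLUSTERED INDEX IX_Orders_ShippedDate
ON dbo.Orders (ShippedDate)
WHERE ShippedDate IS NOT NULL;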

By default, SQL Server works with Simple Parameterization, where it internally adds parameters to a SQL statement executed without parameters so that it can try to reuse a cached execution plan. However, simple parameterization has many limitations and does not work in the following cases:

  • JOIN
  • IN
  • BULK INSERT
  • UNION
  • INTO
  • DISTINCT
  • TOP
  • GROUP BY
  • HAVING
  • COMPUTE
  • Sub Queries

To force SQL Server to parameterize every SQL execution automatically, you need to enable Forced Parameterization at the database level.

The Forced Parameterization setting needs proper testing, as it may also have an adverse impact on performance.
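A sketch of enabling and verifying the setting for a hypothetical database named SalesDB:

ALTER DATABASE SalesDB SET PARAMETERIZATION FORCED;

SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = 'SalesDB';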

  • Snapshot Agent- The Snapshot Agent is used with all types of replication for the initial copy of articles from the publisher to the subscriber. It generates the schema and bulk-copy files of the published tables and other objects, which are later applied at the subscriber. The Snapshot Agent runs at the Distributor.
  • Log Reader Agent- The Log Reader Agent is used with transactional replication. It is responsible for moving transactions marked for replication from the transaction log on the Publisher to the distribution database.
  • Distribution Agent- The Distribution Agent is responsible for applying, at the subscriber, the snapshot and the transactions moved to the distributor by the Snapshot or Log Reader Agent. The Distribution Agent runs at the Distributor for push subscriptions or at the Subscriber for pull subscriptions.
  • Merge Agent- The Merge Agent is used by merge replication to move the initial snapshot, followed by the movement and reconciliation of incremental data changes. Each merge subscription has its own Merge Agent, which works with both the Publisher and the Subscriber and updates both. The Merge Agent runs at the Distributor for push subscriptions or at the Subscriber for pull subscriptions.
  • Queue Reader Agent- The Queue Reader Agent works with transactional replication with the queued updating option. It runs at the Distributor and moves changes made at the Subscriber back to the Publisher. Only one instance of the Queue Reader Agent exists to service all Publishers and publications for a given distribution database.

In an existing replication setup, we sometimes need to add new articles. If you perform such an operation in the usual way, it will mark your snapshot invalid, which results in snapshot re-initialization for all articles.

This sounds harmless, but consider a situation where the number of articles is very high or the size of the existing articles is huge; re-initialization can bring down your system.

SQL Server provides the option of generating a snapshot of only the newly added article. This can be achieved with T-SQL as follows:

  • Set the allow_anonymous property of the publication to FALSE
  • Disable (change) immediate_sync on the publication
  • Invalidate the snapshot by setting the property @force_invalidate_snapshot = 1 in sp_addarticle
  • Refresh subscriptions using sp_refreshsubscriptions
  • Start the Snapshot Agent using Replication Monitor
  • You can notice that bulk-insert statements are created only for the specific article instead of all articles
  • Start the Log Reader Agent
  • Re-enable immediate_sync on the publication
  • Change the allow_anonymous property of the publication back to TRUE
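A T-SQL sketch of these steps; the publication and article names are assumptions, and the exact parameters may vary with the publication's configuration:

EXEC sp_changepublication @publication = 'SalesPub',
     @property = 'allow_anonymous', @value = 'false';
EXEC sp_changepublication @publication = 'SalesPub',
     @property = 'immediate_sync', @value = 'false';

EXEC sp_addarticle @publication = 'SalesPub', @article = 'NewTable',
     @source_object = 'NewTable', @force_invalidate_snapshot = 1;

EXEC sp_refreshsubscriptions @publication = 'SalesPub';
-- Then start the Snapshot Agent (for example from Replication Monitor) for this publication only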

Log shipping may fail due to multiple issues that can cause data loss. Here are some possible reasons:

  • Changes in the shared folder or share access permissions – The Copy job is responsible for copying log backups from the primary to the secondary server. If the shared folder permissions on the primary server change, the Copy job will not be able to access the primary server's share to copy log backups.
  • Human error, such as someone deleting a T-log backup file or truncating the T-log on the primary server – This is the most common issue. If someone deletes a log backup by mistake, the log chain breaks and the restore will fail. Likewise, if someone truncates the T-log file on the primary, the log chain breaks and new log backups cannot be applied.
  • Low free drive space on the secondary – The standard approach is to keep a similar drive structure and space on the secondary as on the primary. If the secondary has less drive space, it may fill up and impact the copy or restore process.
  • Low I/O, memory, or network resources – The Copy and Restore jobs need resources, and if you have a long list of databases then you need substantial resources. A secondary server with poor network, I/O, or memory can cause slowness, crashes, or delays in the restore.
  • TUF file is missing – The TUF is the transaction undo file, which contains active transaction details. If the TUF file is deleted and you do not have a backup of it, you have to reconfigure log shipping.
  • MSDB database is full – MSDB keeps track of log shipping and restore history. If MSDB gets full, the Copy and Restore jobs will start failing.

TUF means Transaction Undo File, used when log shipping is configured in standby mode on the secondary.

The TUF contains details of transactions not applied on the secondary during the last log restore; basically, transactions that were not yet committed on the primary database when the transaction log backup was taken. On the next T-log restore on the secondary, log shipping refers to the TUF file for the state of active transactions.

If the TUF is missing, you cannot recover your log shipping.

It's no surprise that this one pops up often in MS SQL server interview questions.  

We have 3 Operating Modes in DB Mirroring.

  • High Availability:- High-availability operating mode is a combination of principal + mirror + witness, running synchronously. In synchronous mode, the principal server sends the log buffer to the mirror server and then waits for a response from the mirror. The witness server monitors the principal and mirror servers; if the principal becomes unavailable, the witness and mirror decide on automatic failover to bring the mirror online.
  • High Protection:- High-protection operating mode is a combination of principal + mirror, running synchronously. The principal server sends the log buffer to the mirror server and then waits for a response from the mirror, but automatic failover is not possible.
  • High Performance:- High-performance operating mode runs asynchronously and transaction safety is set to off. The principal server sends the log buffer but does not wait for an acknowledgment from the mirror server. The mirror can lag behind the principal, which can cause data loss.

We need to perform a forced failover to bring the mirror online when the principal is down. We can perform this using the below command:

ALTER DATABASE <DB Name> SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS

If you try to fail over as in the normal situation when the principal is online [ALTER DATABASE <DB Name> SET PARTNER FAILOVER], you will receive the below error.

ALTER DATABASE <DB Name> SET PARTNER FAILOVER

Msg 1404, Level 16, State 10, Line 1 The command failed because the database mirror is busy. Reissue the command later.

A common in SQL server interview questions, don't miss this one.  

AlwaysOn provides the feature of a readable replica. This is a long-awaited feature that lets you utilize your passive node.

This is unlike a cluster, where the passive node consumes resources but you cannot utilize them until the primary node goes down.

Benefits:

  • Offloads your read-only workload from the primary to the secondary.
  • Utilizes the resources of the secondary node.
  • Temporary statistics on the readable secondary help with performance tuning on the secondary.
  • Offloading T-log backups to the secondary reduces the backup load on the primary.
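A sketch of enabling read-only access on a secondary replica; the availability group and replica names are assumptions:

ALTER AVAILABILITY GROUP AG1
MODIFY REPLICA ON 'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));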

The Remote Procedure Call (RPC) is a system service for inter-process communication (IPC) between different processes. The processes can be on the same computer, on the LAN, or in a remote location accessed over a WAN or VPN connection.

RPC works on a dynamic range of ports and connects using any available port from that range. A long list of services depends on RPC, such as Telnet, DHCP, DNS, COM+, DTC, WINS, etc.

You may face multiple errors when RPC is not working, such as:

  1. Cluster name not found
  2. Node 1 not able to communicate with Node 2
  3. File share not working
  4. MSDTC issues
  5. Etc.

An execution plan is, basically, the result of the query optimizer's attempt to calculate the most efficient way to implement the request represented by the SQL query you submitted. It is therefore the DBA's primary means of investigating a poorly performing query. Instead of guessing why a given query is performing thousands of scans and putting your I/O through the roof, you can use the execution plan to identify the exact piece of SQL code that is causing the issue. For instance, it might show a scan of an entire table's worth of data when, with the proper index, it could simply pull out just the rows you need. This and more is shown in the execution plan.

Although SQL Server usually generates a good plan, sometimes it is not smart enough to validate its plans and fix the poor ones. You can get an estimated execution plan and an actual graphical execution plan in SQL Server. Produce these plans with the keyboard shortcuts Ctrl+L (estimated) or Ctrl+M (include actual), or with the icons placed to the right of the Execute icon on the standard toolbar of SQL Server Management Studio (SSMS).

There are two types of execution plans:

  • Estimated execution plan: Estimated plans give an estimation of the work that SQL Server expects to perform to get the data, without executing the query.
  • Actual execution plan: Actual execution plans are produced after the Transact-SQL queries or batches are executed, so an actual plan contains run-time information such as the actual resource usage metrics and any run-time warnings. You should always check the actual execution plan while investigating.
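The same plans can also be requested with SET options, shown here as a sketch against a hypothetical dbo.Orders table:

-- Estimated plan: returns the plan XML without executing the statement
SET SHOWPLAN_XML ON;
GO
SELECT * FROM dbo.Orders WHERE OrderID = 42;
GO
SET SHOWPLAN_XML OFF;
GO
-- Actual plan: executes the statement and returns the plan with run-time metrics
SET STATISTICS XML ON;
GO
SELECT * FROM dbo.Orders WHERE OrderID = 42;
GO
SET STATISTICS XML OFF;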

When you launch SQL Server Management Studio (SSMS), you see the option to connect to a DB instance. If desired, you can browse instances running on your network: simply click on the drop-down and, at the bottom, there is a 'Browse for more…' option.


This lets you look for local or network instances; any instances visible on the workstation or network are shown in the list.


Suppose one of these instances is confidential and we do not want clients to see the instance name. That is possible through SQL Server Configuration Manager (SSCM). Open SSCM and navigate to your SQL Server instances:

Select the instance of SQL Server, right-click and select Properties. After selecting Properties, set Hide Instance to "Yes" and click OK or Apply. After the change is made, you need to restart the instance of SQL Server so that it no longer exposes the instance name.


The answer to this question depends on the view type. There are two types of views: a simple view, created from a single table and containing no derived columns based on built-in aggregate functions such as AVG or SUM, and a complex view. We can delete rows through a view if it is a simple one, but not through a complex one. Let me explain by going into the VIEW definition and syntax.

In SQL, a view is a virtual table based on the result set of a SQL query. A view contains rows just like a real physical table, and its fields come from one or more real tables in the database. You can add SQL functions, WHERE and JOIN clauses to a view and present the information as though it were coming from one single table.


To repeat: a normal view is derived from a single table, while a complex view is derived from multiple tables or uses derived columns based on AVG, COUNT, SUM, MIN, MAX, etc. We can delete or update data through a view only when it is a normal one; we cannot delete rows through a complex view.
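A sketch illustrating both cases; the table, column and view names are assumptions:

-- Simple view over a single table; a DELETE through it affects the base table
CREATE VIEW dbo.vw_ActiveCustomers
AS
SELECT CustomerID, CustomerName, IsActive
FROM dbo.Customers
WHERE IsActive = 1;
GO
DELETE FROM dbo.vw_ActiveCustomers WHERE CustomerID = 10;   -- allowed

-- Complex view with an aggregate; DELETE through it is not allowed
CREATE VIEW dbo.vw_SalesByCustomer
AS
SELECT CustomerID, SUM(Amount) AS TotalAmount
FROM dbo.Sales
GROUP BY CustomerID;
GO
-- DELETE FROM dbo.vw_SalesByCustomer WHERE CustomerID = 10;   -- would fail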

This is a regular feature in SQL query interview questions for experienced professionals, be ready to tackle it.  

These are two approaches for taking a database backup: one works in offline mode while the other works online. A cold backup is taken in offline mode, copying the data and log files. It is not standard practice to bring your database offline to take a backup, but there are situations where we need to do so, and the command itself is simple (BACKUP DATABASE). We want to ensure the data is in a consistent state before the backup, as there might be a scenario where a running query has written an entry to the log but has not yet made the corresponding entry in the data files.


A hot backup is just the reverse of a cold one. In a real-world scenario, we do not want to suffer business losses by bringing the database offline, yet at the same time we want to keep a copy of the database to recover from any disaster.

There are lots of things to consider before designing a logical and physical data model. We must first consider what type of data we are going to store, how the data will be accessed, which systems play the role of upstream and downstream, and finally the volume of data expected on a daily basis. If we are upgrading an existing platform, then understanding the existing pain areas also helps developers design the database to meet future needs while remediating those bottlenecks. Understanding your data is important, but unless we have clarity on the different components of the data, we will not be able to make a better plan.

Also, we need to revisit our design at several stages of a project, as we work in an evolving world where requirements change overnight. So agility and flexibility of database design are very important; we should be able to meet future requirements with a flexible approach. We should regularly revisit data relationships, volume and indexes to make sure the design stays flexible, and we should frequently profile the database server using tools like SQL Server Profiler to identify problem areas on a regular basis.

Extended stored procedures are programs written in C/C++. They are similar to stored procedures in that they accept parameters, but the result set is returned through SQL Server's Open Data Services API. They run inside the SQL Server process memory and are stored in the master database. They do not run in the current database context, so to call an extended stored procedure we need to use its fully qualified name, like master.dbo.xp_*. A good practice is to wrap the extended stored procedure in a system procedure so you do not need to call the extended one with the fully qualified name.
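The code example originally shown here is not available, so the following is only a sketch of the wrapping pattern described below, using the undocumented extended procedure xp_varbintohexstr; the wrapper name sp_hexstring follows the text:

USE master;
GO
CREATE PROCEDURE dbo.sp_hexstring
    @binvalue  VARBINARY(256),
    @hexstring VARCHAR(512) OUTPUT
AS
BEGIN
    -- Basic parameter cleaning before calling the extended procedure
    IF @binvalue IS NULL
    BEGIN
        SET @hexstring = NULL;
        RETURN;
    END;
    EXEC master.dbo.xp_varbintohexstr @binvalue, @hexstring OUTPUT;
END;
GO
-- Callable from any database without a fully qualified name
DECLARE @hex VARCHAR(512);
EXEC sp_hexstring 0x1A2B, @hex OUTPUT;
SELECT @hex;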


The cleaning of parameters passed to the extended stored procedure xp_varbintohexstr is taken care of by sp_hexstring, and sp_hexstring can be executed from any database context without a qualifier. This is an example of wrapping an extended procedure inside a system procedure. Once you call the extended stored procedure, the request is transferred in the tabular data stream (TDS) or SOAP format from the calling application to SQL Server.

The performance of any application largely depends upon the backend process which extracts data; if it is faster, the application will also be responsive. The design of a stored procedure is very critical and needs care to remove any bottlenecks. We can consider the following points while designing stored procedures:

  1. Variables: We should minimize the use of variables, as they are stored in the cache.
  2. Dynamic queries: We should minimize the use of dynamic queries, as a dynamic query gets recompiled every time a parameter changes.
  3. The stored procedure should always be called with its fully qualified name, database_name.schema_name.sp_name; this avoids the search for an execution plan in the procedure cache.
  4. We should always use SET NOCOUNT ON, which suppresses the "rows affected" messages and improves query performance. This is very important when the SP is called frequently, because not using it can load the network.
  5. We should not use the sp_ prefix in stored procedure names, as it is reserved for system procedures; using it causes an extra trip to the master database to check whether a system procedure matches the user-defined procedure.
  6. We can use sp_executesql or the KEEPFIXED PLAN hint to avoid recompilation of stored procedures even when we have parameterized dynamic queries, as in the sketch below.

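The original code figure is not available; the following sketch shows the two techniques from point 6 (object names are assumptions):

-- Parameterized dynamic SQL via sp_executesql reuses a single cached plan
EXEC sp_executesql
     N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustId',
     N'@CustId INT',
     @CustId = 42;

-- KEEPFIXED PLAN prevents recompiles caused by statistics changes
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42
OPTION (KEEPFIXED PLAN);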

  7. We should be careful in choosing the WHERE condition, as it drives index seeks. Sometimes we end up doing full table scans because the WHERE conditions were not chosen carefully.
  8. We should avoid the IN operator as it checks NULL values as well; we should use EXISTS, which does not consider NULL values. Of the two queries below, the second gives faster results than the first.

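The original comparison is not available; the sketch below contrasts the two forms referred to in point 8, using hypothetical tables and the NOT variants, where the NULL behaviour is most visible:

-- First form: NOT IN also has to account for NULLs returned by the subquery
SELECT c.CustomerID
FROM dbo.Customers AS c
WHERE c.CustomerID NOT IN (SELECT o.CustomerID FROM dbo.Orders AS o);

-- Second form: NOT EXISTS ignores NULLs and is generally the faster, safer choice
SELECT c.CustomerID
FROM dbo.Customers AS c
WHERE NOT EXISTS (SELECT 1 FROM dbo.Orders AS o WHERE o.CustomerID = c.CustomerID);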

  9. If not necessary, we should avoid using DISTINCT and ORDER BY clauses, as they put additional load on the database engine.
  10. We should avoid CURSORS. Instead of cursors, we should use temp tables or table variables inside a loop to achieve the desired result set.

This is a frequently asked SQL Server interview question.  

SQL Server cursors are appropriate when we need to work on one record at a time, as opposed to taking all the data from a table as a single bulk operation. However, they should be used with care as they can affect performance, especially as the volume of data increases. From a beginner's perspective, cursors should be avoided wherever possible, because if they are badly written, or deal with too much data, they really will affect system performance. There will be times when it is impossible to avoid cursors, and I doubt many systems exist without them. If you do find you need to use them, try to reduce the number of records to process by using a temp table first, and then build the cursor from that; the lower the number of records to process, the quicker the cursor will finish. Please find below a widely used syntax for cursors:
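A typical cursor skeleton as a sketch; the table and column names are assumptions:

DECLARE @CustomerID INT;

DECLARE cur_customers CURSOR FAST_FORWARD FOR
    SELECT CustomerID FROM dbo.Customers;

OPEN cur_customers;
FETCH NEXT FROM cur_customers INTO @CustomerID;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CAST(@CustomerID AS VARCHAR(10));   -- process one row at a time
    FETCH NEXT FROM cur_customers INTO @CustomerID;
END;

CLOSE cur_customers;
DEALLOCATE cur_customers;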


The process flow of a CURSOR is: DECLARE the cursor, OPEN it, FETCH rows in a loop while @@FETCH_STATUS = 0, then CLOSE and DEALLOCATE it.

Locks allow seamless functioning of SQL Server even with concurrent user sessions. As we all know, multiple clients need to access databases simultaneously, so locks come to the rescue to keep information from being corrupted or invalidated when many clients attempt data manipulation (DML) operations such as read, write and update on the database. A lock can be defined as a mechanism to guarantee data integrity and consistency while enabling concurrent access to data; it is used to implement concurrency control when multiple clients access the database to manipulate its data at the same time.

Locks can be applied to various database components. Please find below the areas where a lock can be applied:


  • RID (Row ID): This locks a single row inside a table.
  • Table: It locks the whole table, including data and indexes.
  • Key: It locks a key in a table, meaning the primary key, candidate key, secondary key and so forth.
  • Page: A page is an 8-kilobyte (KB) data or index page. A lock can be placed at page level too, meaning that if a particular page is locked, another client cannot update the data on it.
  • Extent: An extent is a contiguous group of eight data pages, which can include index pages as well.
  • Database: The entire database can be locked, for example for clients who have only read permission on the database.

The locks which get applied on the various resources of SQL Server can be classified into the below modes:

  • Exclusive (X): This lock type, when imposed, ensures that a page or row is available only to the transaction that imposed the lock, for as long as the transaction is running. The X lock is taken by a transaction when it wants to manipulate page or row data through DML operations such as insert, update and delete. This lock can be placed on a page or row only if no other shared or exclusive lock is already placed on the resource, which ensures that only one exclusive lock can be held and no other lock can be acquired until the exclusive lock is released.
  • Shared (S): This lock type, when imposed, reserves a page or row for reading only, which means no other transaction is allowed to modify the record while the lock remains active. However, a shared lock can be held by several transactions concurrently over the same page or row, and in that manner several transactions can share the ability to read the data. Writers have to wait for the shared lock to be released, and no DDL changes are permitted while it is held.
  • Update (U): An update lock is similar to an exclusive lock but is intended to be more flexible. An update lock can be placed on a record that already has a shared lock. When the transaction that holds the update lock is ready to change the data, the update lock (U) is converted to an exclusive lock (X). Note that while an update lock can be placed on a record that has a shared lock, a shared lock cannot be placed on a record that already has an update lock.
  • Intent (I): The idea behind this lock is to guarantee that data modification can be executed properly by stopping another transaction from acquiring a lock on the next object up the hierarchy. Generally, when a transaction wants to acquire a lock on a row, it acquires an intent lock on the table, which is higher in the hierarchy. By acquiring the intent lock, the transaction prevents other transactions from acquiring an exclusive lock on that table.
  • Schema (Sch): This lock is applied on a table or index when we want to change the schema of that resource. Only one schema modification lock can be held on an object at a given point in time. It is applied when we perform operations that depend on the schema of an object.
  • Bulk update (BU): This lock is required when bulk operations need to be performed. When a bulk update lock is acquired, other transactions cannot access the table during the bulk load execution. However, a bulk update lock will not prevent another bulk load from being executed in parallel.
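The locks currently held or requested on an instance can be inspected with a query like the following:

SELECT resource_type,
       DB_NAME(resource_database_id) AS database_name,
       request_mode,                       -- S, X, U, IX, Sch-M, ...
       request_status,
       request_session_id
FROM sys.dm_tran_locks
ORDER BY request_session_id;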

Locks on database objects follow a hierarchy (database, then table, then page, then row), and a lock is always obtained from the top of the hierarchy down.

SQL Server also defines a lock compatibility matrix describing which of these lock modes can be held concurrently on the same resource; for example, shared locks are compatible with other shared locks but not with exclusive locks.

This is the strategy used by SQL Server to avoid locking a large number of database resources, because individual locks consume a lot of memory. Consider a use case where we have to lock 30,000 records, each 500 bytes in size, for a DELETE operation. The memory requirement would be one shared lock on the database, one intent lock on the table, roughly 1,875 exclusive locks on pages and around 30,000 exclusive locks on rows. At roughly 96 bytes per lock, the total memory requirement would be about 3 MB for a single delete operation.


To avoid such a situation, SQL Server uses a lock escalation strategy. This avoids the need for a large amount of lock memory by escalating many fine-grained locks to a single coarser lock. So in cases where we would need thousands of locks on many resources, lock escalation ensures a single lock that fulfills the objective while taking care of the memory issue. An exclusive lock on the table ensures that we do not need page-level locks to maintain data integrity: instead of holding so many row and page locks, SQL Server escalates to an exclusive lock on the table.
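Escalation behaviour can be influenced per table, sketched here for a hypothetical dbo.Orders table:

-- AUTO, TABLE (default) or DISABLE
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);

SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'Orders';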


An optimistic lock is a widely used strategy in n-tier applications: while reading data from the database we record a version of the data, and we check that version again before writing the data back. The versioning is done with a date, a timestamp/rowversion, or a checksum/hash. This approach is used to prevent decisions being made, and writes being applied, on the basis of data that has since been changed by someone else.

The optimistic approach ensures that if a conflicting change is detected through the version check, we start over with fresh data. This strategy is popular where the system deals with a high volume of data, and in n-tier applications where we are not always connected to the database through a single connection: we use a pool of connections and connect with whichever connection is free at that time, so we cannot hold locks across the conversation. This strategy is applied in most banking operations.

A pessimistic lock is just the opposite of an optimistic lock, as it takes an exclusive lock on the resource until we finish our operations. It keeps data integrity high, but performance will always be slower, and we need to stay connected to the database with an active connection (which is the case in a two-tier application).
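A sketch of the optimistic pattern using a rowversion column; the table and columns are assumptions:

-- Assumed table: dbo.Accounts (AccountID INT PRIMARY KEY, Balance MONEY, RowVer ROWVERSION)
DECLARE @AccountID INT = 1, @OriginalVersion BINARY(8);

-- Read the row and remember its version
SELECT @OriginalVersion = RowVer FROM dbo.Accounts WHERE AccountID = @AccountID;

-- Write back only if nobody changed the row in the meantime
UPDATE dbo.Accounts
SET Balance = Balance - 100
WHERE AccountID = @AccountID AND RowVer = @OriginalVersion;

IF @@ROWCOUNT = 0
    PRINT 'Row was modified by another session - reload and retry.';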


Expect to come across this popular SQL Server interview question.  

The performance of a SQL query can improve with NOLOCK because it does not take shared locks on rows, so it does not block reading or processing data from the table. NOLOCK can be used when dealing with large tables having millions of records, or with tables where data rarely changes. If we have too many locks, row-level locking can escalate to page and even table level, which can stall query processing until the operation that acquired the lock completes. With NOLOCK we can read both committed and uncommitted data, which may result in dirty reads if the system has not been designed to handle such a scenario.

Generally, to avoid acting on stale or dirty data we use a TIMESTAMP (rowversion) column, which can be referenced while executing DELETE/UPDATE commands to know whether the data has changed. This helps prevent changes based on data that has since been modified. Comparing every column value is a very expensive operation, which is why the industry standard suggests a TIMESTAMP column.
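For illustration, a NOLOCK read and the equivalent session-level setting; the table name is an assumption:

-- Reads without taking shared locks; may return uncommitted (dirty) data
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK)
WHERE OrderDate >= '20190101';

-- The same effect for the whole session
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;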

Essentially we have to identify the bottleneck which is filling the log file.

As a quick resolution, check all the possible solutions below:

  • Resolve any errors in the Log Reader Agent or Distribution Agent
  • Fix any connectivity issues, either between publisher and distributor or between distributor and subscriber
  • Fix any I/O issues at any level
  • Check whether a huge number of transactions is pending from the publisher
  • Check whether there is a large number of VLFs (use DBCC LOGINFO), which slows down the Log Reader Agent
  • Check that all database statistics are up to date at the distributor; "Auto Update Stats" is usually switched off there by default
  • To find and resolve these issues we can use Replication Monitor, DBCC commands, SQL Profiler and system tables/SPs/functions

If we cannot resolve it just by providing a simple solution, we have to shrink the transaction log file. Below are the two methods.

To shrink the transaction log file:

  1. Back up the log, so transactions in the VLFs are marked as inactive
  2. Shrink the log file using DBCC SHRINKFILE; inactive VLFs will be removed
  3. If you find no difference in size, repeat steps 1 and 2
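A sketch of these steps; the database, logical file and path names are assumptions:

BACKUP LOG SalesDB TO DISK = 'D:\Backup\SalesDB_LOG.trn';

USE SalesDB;
DBCC SHRINKFILE (SalesDB_log, 1024);   -- target size in MB

-- Inspect VLF layout and what is preventing log reuse
DBCC LOGINFO;
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'SalesDB';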

To truncate the transaction log file:

If we are still not able to control the growing log file, the final solution is to disable replication, truncate the log and reinitialize the subscribers.

  1. Disable the replication jobs
  2. Execute the sp_repldone procedure. It disables replication and marks all pending transactions at the publisher as replicated.
  3. Back up the transaction log with the TRUNCATE option.
  4. Shrink the log file using DBCC SHRINKFILE
  5. Flush the article cache using sp_replflush.
  6. Go to the distribution database and truncate the table MSrepl_commands
  7. Connect to Replication Monitor and reinitialize all subscriptions by generating a new snapshot.
  8. Enable all replication-related jobs.

Yes! We can do that. Follow the below steps to publish a new article to the existing publication.

There are two publication parameters that we need to change to “False”: 1. Immediate_Sync and 2. Allow_Anonymous.

Both properties are set to ON by default. If Immediate_Sync is enabled, every time you add a new article it will cause the entire snapshot to be applied, not just the snapshot for that particular article.

Steps:

  1. Change the publication properties “Immediate_Sync” and “Allow_Anonymous” to “False” using SP_CHANGEPUBLICATION
  2. Add the new article to the publication using SP_AddArticle. While executing this procedure, along with the required parameters, also specify the parameter “@force_invalidate_snapshot = 1”.
  3. Add the subscription to the publication for the single table/article using “SP_ADDSUBSCRIPTION”. While executing this proc, specify the parameter “@Reserved = Internal”. Then generate a new snapshot, which will include only the newly added article.

Distributed transactions are transactions that work across databases or instances in a given session. The snapshot isolation level does not support distributed transactions.

We can explicitly start a distributed transaction using “BEGIN DISTRIBUTED TRANSACTION <TranName>”

For example, if BEGIN DISTRIBUTED TRANSACTION is issued on ServerA, the session calls a stored procedure on ServerB and another stored procedure on ServerC. The stored procedure on ServerC executes a distributed query against ServerD, and then all four computers are involved in the distributed transaction. The instance of the Database Engine on ServerA is the originating controlling instance for the transaction.

When a distributed query is executed in a local transaction, the transaction is automatically promoted to a distributed transaction if the target OLE DB data source supports ITransactionLocal. If the target OLE DB data source does not support ITransactionLocal, only read-only operations are allowed in the distributed query.
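A sketch of an explicit distributed transaction spanning a local table and a linked server; the object and server names are assumptions, and MSDTC must be available:

BEGIN DISTRIBUTED TRANSACTION;
    UPDATE dbo.Orders
    SET Status = 'Shipped' WHERE OrderID = 42;                      -- local

    UPDATE LINKEDSRV01.SalesDB.dbo.Orders
    SET Status = 'Shipped' WHERE OrderID = 42;                      -- remote, via linked server
COMMIT TRANSACTION;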

In order to work with these transactions, make sure the below settings are in place.

  1. MSDTC must be running on all participating instances
  2. Choose the option “No authentication required” from the MSDTC properties
  3. Turn on the required options in the linked server properties, like “RPC”, “RPC Out”, “Data Access”, etc.

We can check the current settings and thread allocation using the below queries.

  • Thread setting

select max_workers_count from sys.dm_os_sys_info

  • Active threads

select count(*) from sys.dm_os_threads

  •  The default value of the max worker threads setting is 0, which lets SQL Server size the thread pool automatically (the fixed default of 255 applied to older versions).

Increasing the number of worker threads may actually decrease performance, because too many threads cause context switching, which can consume so many resources that the OS starts to degrade in overall performance.

A common in advanced SQL Server interview questions, don't miss this one.  

Suppose the last full backup is corrupted or otherwise unrestorable, making all differentials after that point useless. You then need to go back a further week to the previous full backup (taken 13 days ago) and restore that, plus the differential from 8 days ago, and the subsequent 8 days of transaction logs (assuming none of those ended up corrupted!).

If you’re taking daily full backups, a corrupted full backup only introduces an additional 24 hours of logs to restore.

Alternatively, a log shipped copy of the database could save your bacon (you have a warm standby, and you know the log backups are definitely good).

One of the most frequently posed SQL query interview questions for experienced professionals, be ready for it.  

Are you the kind of DBA who rebuilds all indexes nightly? Your differential backups can easily be nearly as large as your full backup. That means you’re taking up nearly twice the space just to store the backups, and even worse, you’re talking about twice the time to restore the database.

To avoid these issues with diff backups, ideally, schedule the index maintenance to happen right before the full backup.

.TUF file is the Transaction Undo File, which is created when performing log shipping to a server in Standby mode.

When the database is in standby mode, database recovery is performed each time a log is restored, and this mode also creates a file with the .TUF extension on the destination server: the transaction undo file.

This file contains information on all the modifications that were still open at the time the backup was taken.

The file plays an important role in standby mode, and the reason is obvious: while restoring a log backup, all uncommitted transactions are recorded in the undo file and only committed transactions are written to disk, which is what lets users read the database. When the next transaction log backup is restored, SQL Server fetches all the uncommitted transactions from the undo file and checks against the new transaction log backup whether they have been committed or not.

If a transaction is found to be committed, it is written to disk; otherwise it is kept in the undo file until it is committed or rolled back.

If the .TUF file gets deleted, there is no way to repair log shipping except reconfiguring it from scratch.

A staple in SQL server interview questions, be prepared to answer this one.  

Description

SQL is a database computer language designed to interact with a database; the interaction includes updating, deleting and requesting information from the database. SQL is an ISO and ANSI standard, and database products such as Oracle and Microsoft SQL Server support it. It is used in every industry to deal with complex databases.

The rising demand for SQL has created lots of opportunities for SQL developers at many prime companies around the world. According to a recent report released by Statistica, SQL is among the most popular database management systems (DBMS) in the world and the market is expected to grow by 70% in the year 2020. So, you have a chance to go ahead and make your career in SQL.

SQL developers are paid highly. According to Indeed, they earn an average of $85,483 per year and can play roles like Database Developer, Data/ETL Developer Analyst (SQL Server), SQL Database Developer, SQL Server Developer, Data Visualization Developer, Business Intelligence Developer, and so on. Amdocs, IBM, Oracle, Wipro, Infosys, TCS and CTS are among the top companies hiring SQL developers.

These PL/SQL interview questions are very common and specially framed for basic to advanced level individuals to clear any type of tricky SQL job interview. These top SQL interview questions and answers will increase your confidence level and let you ace the interview easily.

Preparing with these interview questions for SQL will help you achieve your dream job and face the toughest SQL questions in the best possible way. PL/SQL interview questions are framed by experts and have proven to be effective.

So, feel confident during an interview with these SQL interview questions and answers given below. All the best, and excel in the interview!
