Tech Point Fundamentals

Friday, June 3, 2022

SQL Interview Questions and Answers - Part 02

Are you preparing for the SQL Interview? If yes, then you are at the right place. This is the SQL Interview Questions and Answers article series. Here we will see Top 100+ SQL Interview Questions with Answers. 

Please visit our YouTube Channel for interviews and other videos here:

Please read the complete Design Pattern, C#, MVC, WebAPI, and .Net Framework Interview Questions and Answers article series here.


This is the 2nd part of the SQL Interview Questions and Answers article series. Each part contains eight to ten SQL interview questions with answers. Please find the complete list of SQL interview questions here.

I highly recommend reading the previous parts here before continuing with the current part:

SQL Interview Questions and Answers - Part 02

Q09. What is the SQL Profiler?

SQL Profiler is a GUI software tool in Microsoft's SQL Server to monitor, analyze, troubleshoot, and tune SQL databases and their environment. Events are saved in a trace file that can later be analyzed or used to replay a specific series of steps when diagnosing a problem.

Use of SQL Profiler: SQL Server Profiler is used for activities such as:

  • Stepping through problem queries to find the cause of the problem.
  • Finding and diagnosing slow-running queries.
  • Capturing the series of Transact-SQL statements that lead to a problem. 
  • Monitoring the performance of SQL Server to tune workloads.
  • Correlating performance counters to diagnose problems.

To use SQL Server Profiler, you need to understand the terms that describe the way the tool functions:

Event: An event is an action generated within an instance of the SQL Server Database Engine. All of the data generated by an event is displayed in the trace in a single row. Examples include login connections, T-SQL statements, the start or end of a stored procedure, and the opening of a cursor.

EventClass: An event class is a type of event that can be traced. The event class contains all of the data that can be reported by an event. Examples include SQL:BatchCompleted, Audit Login, Audit Logout, Lock:Acquired, and Lock:Released.

DataColumn: A data column is an attribute of an event class captured in the trace. Because the event class determines the type of data that can be collected, not all data columns are applicable to all event classes.

Template: A template defines the default configuration for a trace. Specifically, it includes the event classes you want to monitor with SQL Server Profiler.

Trace: A trace captures data based on selected event classes, data columns, and filters. 

Filter: When you create a trace or template, you can define criteria to filter the data collected by the event. To keep traces from becoming too large, you can filter them so that only a subset of the event data is collected. 

You can start the Profiler within SQL Server Management Studio or with the Azure Data Studio using the SQL Server Profiler extension.

Q10. What is SQL Server Agent?

SQL Server Agent is a Microsoft Windows service that executes scheduled administrative tasks, which are called jobs in SQL Server. SQL Server Agent uses SQL Server to store job information. Jobs contain one or more job steps. Each step contains its own task, for example, backing up a database. SQL Server Agent can run a job on a schedule, in response to a specific event, or on-demand. 

By default, the SQL Server Agent service is disabled when SQL Server is installed unless the user explicitly chooses to autostart the service. SQL Server Agent uses the following components to define the tasks to be performed, when to perform the tasks, and how to report the success or failure of the tasks.

Jobs: A job is a specified series of actions that SQL Server Agent performs. Use jobs to define an administrative task that can be run one or more times and monitored for success or failure. A job can run on one local server or on multiple remote servers.

Schedules: A schedule specifies when a job runs. More than one job can run on the same schedule, and more than one schedule can apply to the same job. A schedule can define the following conditions for the time when a job runs:

  • Whenever SQL Server Agent starts.
  • Whenever CPU utilization of the computer is at a level you've defined as idle.
  • One time, at a specific date and time.
  • On a recurring schedule.

Alerts: An alert is an automatic response to a specific event. You define the conditions under which an alert occurs. An alert can perform the following actions: notify one or more operators, or run a job.

Operators: An operator defines contact information for an individual responsible for the maintenance of one or more instances of SQL Server. In some enterprises, operator responsibilities are assigned to one individual. An operator doesn't contain security information and doesn't define a security principal. SQL Server can notify operators of alerts through email, pager, or net send.
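As a hedged sketch, the components above can be tied together in T-SQL with the SQL Server Agent stored procedures in the msdb database (the job, step, schedule, database, and path names below are hypothetical):

```sql
USE msdb;
GO
-- create a job (hypothetical name)
EXEC dbo.sp_add_job @job_name = N'NightlyBackup';
-- add a job step that backs up a database (hypothetical database and path)
EXEC dbo.sp_add_jobstep
    @job_name  = N'NightlyBackup',
    @step_name = N'Backup AdventureWorks',
    @subsystem = N'TSQL',
    @command   = N'BACKUP DATABASE AdventureWorks TO DISK = N''C:\Backups\AdventureWorks.bak'';';
-- schedule the job to run daily at 23:00
EXEC dbo.sp_add_jobschedule
    @job_name          = N'NightlyBackup',
    @name              = N'DailyAt23',
    @freq_type         = 4,        -- daily
    @freq_interval     = 1,
    @active_start_time = 230000;   -- HHMMSS
-- target the local server so the Agent picks the job up
EXEC dbo.sp_add_jobserver @job_name = N'NightlyBackup', @server_name = N'(local)';
```

The job will only actually run if the SQL Server Agent service itself is running.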

Q11. What are the MASTER and TEMPDB databases? What is the use of these databases?

Both MASTER and TEMPDB are system databases. SQL Server includes the following system databases:

Master Database: It records all the system-level information for an instance of SQL Server.

MSDB Database: It is used by SQL Server Agent for scheduling alerts and jobs.

Model Database: It is used as the template for all databases created on the instance of SQL Server. Modifications made to the model database, such as database size, collation, recovery model, and other database options, are applied to any databases created afterward.

Resource Database: It is a read-only database that contains system objects that are included with SQL Server. System objects are physically persisted in the Resource database, but they logically appear in the sys schema of every database.

Tempdb Database: It is a workspace for holding temporary objects or intermediate result sets.

SQL Server does not support users directly updating the information in system objects such as system tables, system stored procedures, and catalog views. Instead, SQL Server provides a complete set of administrative tools that let users fully administer their system and manage all users and objects in a database. 

For Azure SQL Database single databases and elastic pools, only the master and tempdb databases apply.
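The system databases can be inspected with a simple query against sys.databases (note that the Resource database is hidden and does not appear in this view):

```sql
-- system databases occupy the lowest database_ids
SELECT name, database_id, recovery_model_desc
FROM sys.databases
WHERE database_id <= 4;   -- master, tempdb, model, msdb
```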

Q12. Which TCP/IP port does SQL Server run on? How can you change it?

The default instance of the SQL Server Database Engine listens on TCP port 1433. Named instances of the Database Engine and SQL Server Compact are configured for dynamic ports. This means they select an available port when the SQL Server service is started.

When you are connecting to a named instance through a firewall, configure the Database Engine to listen on a specific port so that the appropriate port can be opened in the firewall. Because port 1433 is the well-known standard for SQL Server, some organizations specify that the SQL Server port number should be changed to enhance security.
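As a quick check, the port your current connection is actually using can be read from the sys.dm_exec_connections DMV; changing the port itself is done in SQL Server Configuration Manager (TCP/IP protocol properties), not in T-SQL:

```sql
-- show the address and TCP port of the current connection
-- (local_tcp_port is NULL for non-TCP connections, e.g. shared memory)
SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```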

Q13. What is a Linked Server? How can you create it?

Linked servers enable the SQL Server Database Engine and Azure SQL Managed Instance to read data from remote data sources and execute commands against remote database servers outside the instance of SQL Server.

Typically linked servers are configured to enable the Database Engine to execute a Transact-SQL statement that includes tables in another instance of SQL Server, or another database product such as Oracle.

Linked servers enable you to implement distributed databases that can fetch and update data in other databases. They are a good solution in scenarios where you need to implement database sharding without writing custom application code or directly loading from remote data sources.

There are two ways of configuring a linked server: by using the sp_addlinkedserver system stored procedure, or through the SQL Server Management Studio (SSMS) GUI.
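A minimal sketch of the stored-procedure route, assuming a hypothetical remote instance named REMOTESRV and a hypothetical remote table:

```sql
-- register the linked server (when @srvproduct is 'SQL Server',
-- @server must be the network name of the remote instance)
EXEC master.dbo.sp_addlinkedserver
    @server     = N'REMOTESRV',
    @srvproduct = N'SQL Server';
-- map local logins to the remote server using their own credentials
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTESRV',
    @useself    = N'TRUE';
-- query the remote table with a four-part name
SELECT * FROM REMOTESRV.AdventureWorks.dbo.Employee;
```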

Q14. What are the different authentication modes in SQL Server?

SQL Server supports two authentication modes: Windows Authentication mode and Mixed Mode.

Windows Authentication Mode: 

Windows authentication is the default and is often referred to as integrated security because this SQL Server security model is tightly integrated with Windows. With Windows authentication, users are already logged onto Windows and do not have to log on separately to SQL Server.

Specific Windows user and group accounts are trusted to log in to SQL Server. Windows users who have already been authenticated do not have to present additional credentials. The following SqlConnection.ConnectionString specifies Windows authentication without requiring users to provide a user name or password.

"Server=MSSQL1;Database=AdventureWorks;Integrated Security=true;"

Mixed Authentication Mode: 

The mixed-mode supports authentication both by Windows and by SQL Server. User name and password pairs are maintained within SQL Server. If you must use mixed-mode authentication, you must create SQL Server logins, which are stored in SQL Server. You then have to supply the SQL Server user name and password at run time.
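For comparison with the Windows authentication connection string above, in mixed mode the SQL Server login is supplied in the connection string, and the server's current mode can be checked with SERVERPROPERTY (the login name and password below are placeholders):

```sql
-- example mixed-mode connection string (placeholder credentials):
-- "Server=MSSQL1;Database=AdventureWorks;User Id=AppLogin;Password=<password>;"

-- 1 = Windows Authentication only, 0 = Mixed Mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;
```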

Microsoft recommends using Windows authentication wherever possible. Windows authentication uses a series of encrypted messages to authenticate users in SQL Server. When SQL Server logins are used, SQL Server login names and encrypted passwords are passed across the network, which makes them less secure.

Database logins are distinct from database users. You must map logins or Windows groups to database users or roles in a separate operation. You then grant permissions to users or roles to access database objects.

What are the different SQL Login Types?

Different Login Types: SQL Server supports three types of logins:

Windows Group Login: Granting access to a Windows group grants access to all Windows user logins that are members of the group.

Local Windows User Login: A local Windows user account or trusted domain account. SQL Server relies on Windows to authenticate the Windows user accounts. 

SQL Server Login: SQL Server stores both the username and a hash of the password in the master database, by using internal authentication methods to verify login attempts.

SQL Server installs with a SQL Server login named "sa". The "sa" login maps to the sysadmin fixed server role, which has irrevocable administrative privileges on the whole server.

All members of the Windows BUILTIN\Administrators group are members of the sysadmin role by default but can be removed from that role.
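A hedged sketch of creating each login type in T-SQL (the domain, login, database, and table names are hypothetical):

```sql
-- Windows group login: grants access to every member of the group
CREATE LOGIN [CONTOSO\SqlAdmins] FROM WINDOWS;
-- Windows user login: authenticated by Windows, not SQL Server
CREATE LOGIN [CONTOSO\jsmith] FROM WINDOWS;
-- SQL Server login: username and password hash stored in master
CREATE LOGIN AppLogin WITH PASSWORD = N'Str0ng!Passw0rd';

-- logins are distinct from users: map the login to a database user,
-- then grant that user permissions on database objects
USE AdventureWorks;
CREATE USER AppUser FOR LOGIN AppLogin;
GRANT SELECT ON dbo.tblEmployee TO AppUser;
```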

Q15. What is Collation in SQL? How does it affect the database?

A collation specifies the bit patterns that represent each character in a dataset. Collations also determine the rules that sort and compare data. SQL Server supports storing objects that have different collations in a single database. 

For non-Unicode columns, the collation setting specifies the code page for the data and which characters can be represented. The data that you move between non-Unicode columns must be converted from the source code page to the destination code page.

Collations in SQL Server provide sorting rules, case, and accent sensitivity properties for your data. Collations that are used with character data types, such as char and varchar, dictate the code page and corresponding characters that can be represented for that data type.

When you select a collation for your server, database, column, or expression, you're assigning certain characteristics to your data. These characteristics affect the results of many operations in the database. 

Whether you're installing a new instance of SQL Server, restoring a database backup, or connecting the server to client databases, it's important to understand the locale requirements, sorting order, and case and accent sensitivity of the data that you're working with. 

Transact-SQL statement results can vary when the statement is run in the context of different databases that have different collation settings. If you must work with objects that have different collation and code page settings, code your queries to consider the rules of collation precedence. 

Different Collation Options:

The options associated with a collation are case sensitivity (CS), accent sensitivity (AS), kana sensitivity (KS), width sensitivity (WS), and variation-selector sensitivity (VSS). You can specify these options by appending them to the collation name.

For example, the collation Japanese_Bushu_Kakusu_100_CS_AS_KS_WS_UTF8 is case-sensitive, accent-sensitive, kana-sensitive, width-sensitive, and UTF-8 encoded.

Case-sensitive (_CS): Distinguishes between uppercase and lowercase letters. If this option is selected, lowercase letters sort ahead of their uppercase versions. If this option isn't selected, the collation is case-insensitive. So SQL Server considers the uppercase and lowercase versions of letters to be identical for sorting purposes.

Accent-sensitive (_AS): Distinguishes between accented and unaccented characters. If this option isn't selected, the collation is accent-insensitive. SQL Server considers the accented and unaccented versions of letters to be identical for sorting purposes. 

Kana-sensitive (_KS): Distinguishes between the two types of Japanese kana characters: Hiragana and Katakana. If this option isn't selected, the collation is kana-insensitive. So SQL Server considers Hiragana and Katakana characters to be equal for sorting purposes.

Width-sensitive (_WS): Distinguishes between full-width and half-width characters. If this option isn't selected, SQL Server considers the full-width and half-width representation of the same character to be identical for sorting purposes.

Variation-selector-sensitive (_VSS): Distinguishes between various ideographic variation selectors in the Japanese collations Japanese_Bushu_Kakusu_140 and Japanese_XJIS_140, which are introduced in SQL Server 2017.

UTF-8 (_UTF8): Enables UTF-8 encoded data to be stored in SQL Server. If this option isn't selected, SQL Server uses the default non-Unicode encoding format for the applicable data types. 

Setting collations is supported at different levels of an instance of SQL Server: server level, database level, column level, and T-SQL expression level.
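These levels can be sketched in T-SQL as follows (the table and database names are hypothetical; the collation names are standard SQL Server collations):

```sql
-- check the server-level and database-level collations
SELECT SERVERPROPERTY('Collation') AS ServerCollation;
SELECT DATABASEPROPERTYEX('AdventureWorks', 'Collation') AS DatabaseCollation;

-- column-level collation: a case-sensitive name column
CREATE TABLE dbo.Customer
(
    CustomerName varchar(100) COLLATE SQL_Latin1_General_CP1_CS_AS
);

-- expression-level collation: force a case-insensitive comparison
SELECT * FROM dbo.Customer
WHERE CustomerName = 'alice' COLLATE SQL_Latin1_General_CP1_CI_AS;
```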

Please watch the LEN vs DATALENGTH video here to see how collation affects data storage and SQL logic.

Q16. What is the use of Statistics? Why is updating statistics important in SQL?

Statistics store information about the distribution of the column value(s) in your tables, as well as the total number of rows. For indexes, they store the distribution of key values.

A histogram is created for the first table column in each statistics object which stores this data distribution or cardinality. For multi-column statistics objects, information is also stored on the correlation of the values within the object. A density vector is also created for each column and this stores information about the number of duplicate values for each column.

Existing statistics are used by the query optimizer in order to generate the most efficient query plan for execution. The query optimizer uses the statistics to determine when an index should be used, how to access those indexes, how best to join tables, etc.

Statistics are created in a couple of different ways. The first way is when an index is created on a table column(s). When this happens the Microsoft SQL Server database automatically creates a statistics object for the column(s) that make up the index. 

The next way also happens automatically if you have the "AUTO CREATE STATISTICS" database option set. You can check this option by running the following T-SQL command:

SELECT name,is_auto_create_stats_on FROM sys.databases WHERE name = 'TechPointFundamentals';  

The last way is to create the statistics manually. SQL Server will only create single-column statistics and if you require anything other than single-column statistics these have to be created manually using the CREATE STATISTICS command. 

CREATE STATISTICS STATS_Employee_Name on tblEmployee (EmployeeName);

Statistics in SQL Server are stored in binary large objects (BLOBs) and can be accessed using the following system catalog views and DMVs: sys.stats, sys.stats_columns, sys.dm_db_stats_properties, or sys.dm_db_stats_histogram.

A much easier way to view object statistics is to use the DBCC SHOW_STATISTICS command: DBCC SHOW_STATISTICS ("tblEmployee",EmployeeName)


Statistics can be updated automatically as well as manually. To update statistics manually, we can use the UPDATE STATISTICS command or the built-in stored procedure sp_updatestats.

sp_updatestats will update the statistics for every user-defined table in the database it is run against. The UPDATE STATISTICS command gives you the ability to use the same sampling options that are available with the CREATE STATISTICS command.
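A short sketch of the manual options, reusing the tblEmployee statistics object created above:

```sql
-- rebuild one statistics object by scanning every row
UPDATE STATISTICS dbo.tblEmployee STATS_Employee_Name WITH FULLSCAN;
-- or sample a percentage of rows instead of a full scan
UPDATE STATISTICS dbo.tblEmployee STATS_Employee_Name WITH SAMPLE 25 PERCENT;
-- update statistics on every user-defined table in the current database
EXEC sp_updatestats;
```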

Statistics can also be updated automatically when the "AUTO UPDATE STATISTICS" database option is set. If this option is set, SQL Server checks how many rows have been modified in the table; if a certain threshold is exceeded, it updates the statistics, and a new execution plan is generated for the plan cache.

When statistics are updated, queries in the plan cache referencing these objects will recompile the next time they are called so you do have to be careful not to update statistics too often as the performance impact of frequently recompiling queries can hurt your overall system performance.

To Be Continued Part-03...

Recommended Articles

Thanks for visiting this page. Please follow and join us on LinkedIn, Facebook, Telegram, Quora, YouTube, Twitter, Pinterest, Tumblr, and VK for regular updates.

