Migrating Servers to Azure – Activating Windows

Recently I have migrated several servers from Vultr to Microsoft Azure.

As part of this migration, Windows lost its activation – most likely because Vultr runs its own KMS service, which the migrated server is no longer able to contact:

In order to resolve this, I changed the product key to Microsoft’s generic KMS client setup key.

Then the KMS server was changed to Microsoft’s Azure KMS endpoint, and activation was forced (run the following from an administrative command prompt):

cscript c:\windows\system32\slmgr.vbs /ipk <product key>
cscript c:\windows\system32\slmgr.vbs /skms kms.core.windows.net:1688
cscript c:\windows\system32\slmgr.vbs /ato
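
To confirm the activation, the licensing status can then be checked from the same administrative command prompt; /dlv displays detailed licence information and /xpr shows when the current activation expires:

cscript c:\windows\system32\slmgr.vbs /dlv
cscript c:\windows\system32\slmgr.vbs /xpr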

SQL Server – Identify Unused Indexes

Within SQL Server an index is used to optimise the reading of data from the database.

However, a side effect of having such an index is reduced write speed for inserts and updates, because SQL Server also needs to update and maintain the index.

Running the SQL below highlights indexes which have not been used since the usage statistics were last reset (typically the last service restart), allowing you to remove them if they are no longer required. The results are sorted by the number of updates, as heavily updated indexes which are never read are the most likely candidates for removal.

SELECT TOP 200 o.NAME AS ObjectName
	,i.NAME AS IndexName
	,i.index_id AS IndexID
	,dm_ius.user_seeks AS UserSeek
	,dm_ius.user_scans AS UserScans
	,dm_ius.user_lookups AS UserLookups
	,dm_ius.user_updates AS UserUpdates
	,p.TableRows
	,'DROP INDEX ' + QUOTENAME(i.NAME) + ' ON ' + QUOTENAME(s.NAME) + '.' + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats dm_ius
INNER JOIN sys.indexes i ON i.index_id = dm_ius.index_id
	AND dm_ius.OBJECT_ID = i.OBJECT_ID
INNER JOIN sys.objects o ON dm_ius.OBJECT_ID = o.OBJECT_ID
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
INNER JOIN (
	SELECT SUM(p.rows) TableRows
		,p.index_id
		,p.OBJECT_ID
	FROM sys.partitions p
	GROUP BY p.index_id
		,p.OBJECT_ID
	) p ON p.index_id = dm_ius.index_id
	AND dm_ius.OBJECT_ID = p.OBJECT_ID
WHERE OBJECTPROPERTY(dm_ius.OBJECT_ID, 'IsUserTable') = 1
	AND dm_ius.database_id = DB_ID()
	AND i.type_desc = 'nonclustered'
	AND i.is_primary_key = 0
	AND i.is_unique_constraint = 0
	AND dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups = 0
ORDER BY UserUpdates DESC

The following line can also be altered to include indexes which are rarely used (the code above returns only completely unused indexes), as there may be indexes which are expensive to maintain but are only read once a day, for example!

AND dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups = 0
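
For example, to also pick up indexes read fewer than ten times since the statistics were last reset (the threshold of 10 is an arbitrary starting point), the condition could be changed to something like:

AND dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups < 10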

An example of the results is shown below:

Windows 10 – Missing Taskbar Icons

I had a very odd issue with Windows 10 where taskbar and Start Menu icons were missing:

After lots of research, I’ve partially fixed this by running the following from an elevated command prompt:

for /R "%APPDATA%\Microsoft\Windows\Start Menu\Programs\" %f in (.lnk) do copy /b "%f"+,, "%f" 1>nul
for /R "%ALLUSERSPROFILE%\Microsoft\Windows\Start Menu\Programs\" %f in (
.lnk) do copy /b "%f"+,, "%f" 1>nul
taskkill /IM explorer.exe /F
explorer.exe

What this does is:

  • Update the last-modified date on the Start Menu shortcut (.lnk) files
  • Kill Explorer process
  • Start Explorer process

The result being:

The one remaining missing icon is for Microsoft Edge (other Microsoft Store apps, such as Calculator, are also missing their icons).

After lots of research, investigation and trial-and-error, I solved the problem by uninstalling “Google Drive File Stream” (plus I don’t really use it anyway):

Tada Fixed:

Taskcentre – Recording Success / Failure using “Call Task” Tool

The “Call Task” tool in Taskcentre is used to call other tasks from within a task.

The task can run in two modes:

  • Synchronous
  • Queue

Queue
This adds the task to the queue, to be scheduled at a future point in time.

Synchronous
This queues the task and awaits a response.

Sometimes in Taskcentre you want to know whether a task you have called has failed. This is useful where there is a queue of objects requiring some work which you are processing in a batch. Examples include:

  • Invoices to send
  • Sales Orders to process
  • Records requiring upload to a website
  • etc

The issue with calling tasks synchronously is that you cannot easily tell whether the called task has failed. However, by analysing the output of the task, you can record this information and make decisions accordingly.

Here’s the output of calling this task:

<DataOutput>
	<OutputData>
		<SupplementaryReference />
		<TaskRunStatus>2</TaskRunStatus>
		<Folder>Tasks\Test\Looping options\</Folder>
		<TaskID>383</TaskID>
		<fileName>PO128.doc</fileName>
		<filePath>C:\Temp\File_Test</filePath>
	</OutputData>
</DataOutput>

The “TaskRunStatus” flag is the important part here. If the value is “2”, then the task completed successfully. Value “4” is a failure.

Using the “XML To RecordSet” tool, the results from each “Call Task” step can be mapped to a Recordset:

From there, we can use the “VBScript” tool to loop through all the Recordset entries:

Here’s the VBScript:

If ThisStep.RecordSource("OutputData_TaskRunStatus") = 2 Then
  Variables("SuccessCount") = Variables("SuccessCount") + 1
Else
  Variables("FailCount") = Variables("FailCount") + 1
End If
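
Note that the SuccessCount and FailCount variables need to be initialised before this loop runs; a minimal sketch, assuming an earlier “VBScript” step is used for this, would be:

Variables("SuccessCount") = 0
Variables("FailCount") = 0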

From there, you can use these variables within your other steps. For example, here’s an HTML message which is posted to HipChat using the code here.

Variables("Message") = "<B>Task Completion</B>"
Variables("Message") = Variables("Message") & "<BR/>Success: " & Variables("SuccessCount")
Variables("Message") = Variables("Message") & "<BR/>Fail: " & Variables("FailCount")

Using Dynamic Management Views to Recommend Indexes in SQL Server

Did you know that SQL Server provides several dynamic management views which report recommended indexes?

Using the following SQL, you can identify where SQL Server recommends indexes, ordered by the potential performance boost from adding the suggested index.

Obviously you may wish to review these manually rather than just blindly creating them, but it gives you a pointer in the right direction!

It is also worth noting that these views are cleared when the SQL Server service is restarted, so it’s a good idea to restart the service, let the system run for a couple of hours or so, then grab the results and analyse them further.

The code below works for all versions of SQL Server after 2008, as well as Azure SQL Database.

SELECT migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure
	,'CREATE INDEX [missing_index_' + CONVERT(VARCHAR, mig.index_group_handle) + '_' + CONVERT(VARCHAR, mid.index_handle) + '_' + LEFT(PARSENAME(mid.statement, 1), 32) + ']' + ' ON ' + mid.statement + ' (' + ISNULL(mid.equality_columns, '') + CASE 
		WHEN mid.equality_columns IS NOT NULL
			AND mid.inequality_columns IS NOT NULL
			THEN ','
		ELSE ''
		END + ISNULL(mid.inequality_columns, '') + ')' + ISNULL(' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement
	,migs.*
	,mid.database_id
	,mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC

Example results:
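
As these statistics are reset when the service restarts, it can also be useful to check when the instance last started before relying on the numbers. On a full SQL Server instance, something like the following can be used (column availability may vary between versions and in Azure SQL Database):

SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;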

Azure SQL Database View Running Transactions / Blocking

With Azure SQL Database, sometimes you want to view the currently running transactions. Unfortunately there is no GUI equivalent of the on-premises SQL Server Activity Monitor (right-click the server, “Activity Monitor”), however the same information can be retrieved by querying sys.dm_exec_requests.

The following SQL is very useful as it shows details of all running transactions within the database, including highlighting whether each transaction is blocked, or is causing blocking.

SELECT r.session_id
	,CASE 
		WHEN r.session_id IN (
				SELECT DISTINCT (blocking_session_id)
				FROM sys.dm_exec_requests
				)
			THEN 'Yes'
		ELSE ''
		END AS blocking
	,r.blocking_session_id
	,r.request_id
	,r.start_time
	,r.STATUS
	,r.command
	,r.database_id
	,r.user_id
	,r.wait_type
	,r.wait_time
	,r.last_wait_type
	,r.wait_resource
	,r.total_elapsed_time
	,r.cpu_time
	,CASE r.transaction_isolation_level
		WHEN 0
			THEN 'Unspecified'
		WHEN 1
			THEN 'ReadUncommitted'
		WHEN 2
			THEN 'ReadCommitted'
		WHEN 3
			THEN 'Repeatable'
		WHEN 4
			THEN 'Serializable'
		WHEN 5
			THEN 'Snapshot'
		END AS transaction_isolation_level
	,r.row_count
	,r.percent_complete
	,st.TEXT AS sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
GROUP BY r.session_id
	,r.blocking_session_id
	,r.request_id
	,r.start_time
	,r.STATUS
	,r.command
	,r.database_id
	,r.user_id
	,r.wait_type
	,r.wait_time
	,r.last_wait_type
	,r.wait_resource
	,r.total_elapsed_time
	,r.cpu_time
	,r.transaction_isolation_level
	,r.row_count
	,r.percent_complete
	,st.TEXT
ORDER BY r.total_elapsed_time
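
If a session is identified as the cause of blocking and needs to be terminated, the standard KILL command can be used. Use this with care; the session id below is purely illustrative:

-- 57 is a hypothetical session_id taken from the blocking_session_id column above
KILL 57;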

Azure SQL Database Index Size

Recently I’ve been doing some work on optimising indexes within Azure SQL Database. The following SQL from Daniel is very useful in showing the status of all the current indexes within a database. The SQL works for SQL Server 2014 onwards and Azure SQL Database; I have not tested it on any other versions.

This code returns the following:

  • Object Type
  • Object Name
  • Index Name
  • Index Type
  • Partition (if any)
  • Compression
  • Data Space
  • Fill Factor
  • Rows
  • Reserved MB
  • In Row Used MB
  • Row Overflow Used MB
  • Out of Row Used MB
  • Total Used MB

SELECT --- Schema, type and name of object and index:
	       REPLACE(obj.type_desc, '_', ' ') AS objectType
	,       sch.[name] + '.' + obj.[name] AS objectName
	,       ISNULL(ix.[name], '') AS indexName
	,       ix.type_desc AS indexType
	,       --- Partition number, if there are partitions:
	      (CASE COUNT(*) OVER (
				PARTITION BY ps.[object_id]
				,ps.index_id
				)             
			WHEN 1
				THEN ''             
			ELSE CAST(ps.partition_number AS VARCHAR(10))             
			END) AS [partition]
	,       --- Storage properties:
	       p.data_compression_desc AS [compression]
	,       ds.[name] + ISNULL('(' + pc.[name] + ')', '') AS dataSpace
	,       STR(ISNULL(NULLIF(ix.fill_factor, 0), 100), 4, 0) + '%' AS [fillFactor]
	,       --- The raw numbers:
	       ps.row_count AS [rows]
	,       STR(1.0 * ps.reserved_page_count * 8 / 1024, 12, 2) AS reserved_MB
	,       STR(1.0 * ps.in_row_used_page_count * 8 / 1024, 12, 2) AS inRowUsed_MB
	,       STR(1.0 * ps.row_overflow_used_page_count * 8 / 1024, 12, 2) AS RowOverflowUsed_MB
	,       STR(1.0 * ps.lob_used_page_count * 8 / 1024, 12, 2) AS outOfRowUsed_MB
	,       STR(1.0 * ps.used_page_count * 8 / 1024, 12, 2) AS totalUsed_MB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.partitions AS p ON     ps.[partition_id] = p.[partition_id]
INNER JOIN sys.objects AS obj ON     ps.[object_id] = obj.[object_id]
INNER JOIN sys.schemas AS sch ON     obj.[schema_id] = sch.[schema_id]
LEFT JOIN sys.indexes AS ix ON     ps.[object_id] = ix.[object_id]
	AND     ps.index_id = ix.index_id
--- Data space is either a file group or a partition function:
LEFT JOIN sys.data_spaces AS ds ON     ix.data_space_id = ds.data_space_id
--- This is the partitioning column:
LEFT JOIN sys.index_columns AS ixc ON     ix.[object_id] = ixc.[object_id]
	AND     ix.index_id = ixc.index_id
	AND     ixc.partition_ordinal > 0
LEFT JOIN sys.columns AS pc ON     pc.[object_id] = obj.[object_id]
	AND     pc.column_id = ixc.column_id
--- Not interested in system tables and internal tables:
WHERE obj.[type] NOT IN (
		'S'
		,'IT'
		)
ORDER BY sch.[name]
	,obj.[name]
	,ix.index_id
	,p.partition_number;

Example Results:
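
If the aim is to find the indexes consuming the most space, it may be worth swapping the final ORDER BY so that the largest indexes appear first, for example:

ORDER BY ps.used_page_count DESC;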

Run Programs as a Domain User from a Non-Domain Account

Following on from my blog post about running Microsoft Dynamics Nav as a different user here, I have done some further investigation of the runas command.

Typically the runas command authenticates the user locally; however, there is a little-known switch which skips local authentication and uses the supplied credentials for network authentication only.

This switch allows you to run the Microsoft Dynamics NAV Client, Development Environment, or any other tool as a domain user from a non-domain PC.

Example syntax:

runas.exe /netonly /user:<DOMAIN>\<USER> "C:\Program Files (x86)\Microsoft Dynamics NAV\71\RoleTailored Client\finsql.exe"

For ease of access, it’s also possible to update your shortcuts as appropriate:

Firstly, within the shortcut, amend the target by prefixing with:

runas.exe /netonly /user:<DOMAIN>\<USER>
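
For example, using the same finsql.exe path as the earlier example, the full shortcut target would end up looking something like this (adjust the domain, user and path to suit your installation):

runas.exe /netonly /user:<DOMAIN>\<USER> "C:\Program Files (x86)\Microsoft Dynamics NAV\71\RoleTailored Client\finsql.exe"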

Also ensure you use “Change Icon”, otherwise the shortcut icon will change to a generic square (not good for users):

When you run the shortcut, it now asks for the domain credentials, then runs the application as normal:

Delete Duplicate Records – Leave Only 1 Behind MS SQL

Recently I’ve had to delete duplicate records from the Record Link table in Microsoft Dynamics Nav.

The following SQL (which uses ROW_NUMBER with the OVER clause for partitioning) deletes all duplicate records, leaving the first occurrence of each behind.

The SQL can easily be adapted for other tables / scenarios. Currently it identifies duplicates within each Record ID using the following fields:

  • Description
  • URL 1

The SQL does not compare the Note field (which is stored as a BLOB); it only considers records where Note is NULL.

DELETE
FROM [Record Link]
WHERE [Link ID] IN (
		SELECT [Link ID]
		FROM (
			SELECT *
				,ROW_NUMBER() OVER (
					PARTITION BY [Record ID]
					,[Description]
					,[URL1] ORDER BY [Link ID]
					) AS [ItemNumber]
			FROM [Record Link]
			WHERE [Note] IS NULL
			) a
		WHERE ItemNumber > 1
		)
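
Before running the delete, it may be worth previewing the rows that would be removed by running the inner query on its own, for example:

SELECT *
FROM (
	SELECT *
		,ROW_NUMBER() OVER (
			PARTITION BY [Record ID]
			,[Description]
			,[URL1] ORDER BY [Link ID]
			) AS [ItemNumber]
	FROM [Record Link]
	WHERE [Note] IS NULL
	) a
WHERE ItemNumber > 1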

Suggesting MS SQL Index Creations Using sys.dm_db_missing_index_details

Within SQL Server, sys.dm_db_missing_index_details returns indexes which the Query Optimiser believes are missing. As queries are run, the Query Optimiser records, in the background, any indexes it feels would have helped. Restarting the SQL Server service resets all of these statistics.

Using some SQL, it is possible to identify potentially missing indexes and generate sample statements to create them. I’ve found the following SQL useful:

SELECT mid.statement
	,migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure
	,'CREATE INDEX [missing_index_' + CONVERT(VARCHAR, mig.index_group_handle) + '_' + CONVERT(VARCHAR, mid.index_handle) + '_' + LEFT(PARSENAME(mid.statement, 1), 32) + ']' + ' ON ' + mid.statement + ' (' + ISNULL(mid.equality_columns, '') + CASE 
		WHEN mid.equality_columns IS NOT NULL
			AND mid.inequality_columns IS NOT NULL
			THEN ','
		ELSE ''
		END + ISNULL(mid.inequality_columns, '') + ')' + ISNULL(' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement
	,migs.*
	,mid.database_id
	,mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC

This produces results as follows:

Columns include:

  • statement – The table affected
  • improvement_measure – A generated measure of improvement (based on user cost, impact, seeks etc)
  • create_index_statement – A cut and paste friendly statement to add the index
  • group_handle – Identifies a group of missing indexes. This identifier is unique across the server.
  • unique_compiles – Number of compilations and recompilations that would benefit from this missing index group
  • user_seeks – Number of seeks caused by user queries that the recommended index in the group could have been used for
  • user_scans – Number of scans caused by user queries that the recommended index in the group could have been used for.
  • last_user_seek – Date and time of last seek caused by user queries that the recommended index in the group could have been used for.
  • last_user_scan – Date and time of last scan caused by user queries that the recommended index in the group could have been used for.
  • avg_total_user_cost – Average cost of the user queries that could be reduced by the index in the group.
  • avg_user_impact – Average percentage benefit that user queries could experience if this missing index group was implemented.
  • system_seeks – Number of seeks caused by system queries, such as auto stats queries, that the recommended index in the group could have been used for.
  • last_system_seek – Date and time of last system seek caused by system queries that the recommended index in the group could have been used for.
  • last_system_scan – Date and time of last system scan caused by system queries that the recommended index in the group could have been used for.
  • avg_total_system_cost – Average cost of the system queries that could be reduced by the index in the group.
  • avg_system_impact – Average percentage benefit that system queries could experience if this missing index group was implemented.
  • database_id – The ID number of the database
  • object_id – The ID number of the object