Sunday, February 7, 2021

Monitoring SQL Server Analysis Services (SSAS) using Spotlight (Quest Software)

This post answers two simple questions:

1. Can Quest Software's Spotlight successfully monitor SQL Server Analysis Services?

2. If it can, which SSAS parameters and which database and cube details does it monitor and report on?


To answer these questions I had to download, install, and configure the Spotlight software.

I downloaded the trial version from the official Quest Software website:

https://www.quest.com/products/spotlight-on-sql-server-enterprise/


So, the answer to the first question is "Yes".

The newest version of the application, 13.3, monitors not only the Windows box and SQL Server, but Analysis Services as well!


After being fully configured on my local machine, it looks like this:


I configured it to monitor my local SQL Server, the Windows box, and the Analysis Services server.
Since I'm not really interested in Windows or SQL Server here, I click to see "Analysis Services".

It follows the common Quest Software design, so I do not have to learn how to use it.
The screen provides the following metric categories:
  • Connections
  • Memory
  • Storage
  • CPU
  • Server
When I click on a metric, it opens a menu with the details.
For instance, when I click on Connections I get the following drop-down menu:

When I click on "Eye" sign I can see the list of current connections with all their parameters:

From there I can easily switch to the current "Sessions" tab

Or just to the list of the most recent commands. If I select a command, I can see the most recent query for that connection:
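If you want to cross-check what Spotlight shows on these screens, SSAS exposes the same connection, session, and command details through its own DMVs (my addition, not a Spotlight feature). Run each SELECT separately in a query window connected to the SSAS instance:

SELECT * FROM $SYSTEM.DISCOVER_CONNECTIONS;   -- current client connections
SELECT * FROM $SYSTEM.DISCOVER_SESSIONS;      -- active sessions
SELECT * FROM $SYSTEM.DISCOVER_COMMANDS;      -- last/current command per session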

The Memory section shows general memory usage, caching metrics, and thread stats:
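The memory numbers can be cross-checked the same way; SSAS has matching DMVs for its memory holders (again my addition; run each statement separately):

SELECT * FROM $SYSTEM.DISCOVER_MEMORYUSAGE;          -- detailed memory allocations
SELECT * FROM $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE;  -- memory usage per object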

The Storage section provides information about my SSAS databases.
In this case I have only one, the AdventureWorks DB:

And from that screen I can easily switch to "Cubes"
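If you want to verify these lists against the server itself, the databases and cubes are also exposed as SSAS DMVs (my addition; run each statement separately):

SELECT * FROM $SYSTEM.DBSCHEMA_CATALOGS;  -- SSAS databases
SELECT * FROM $SYSTEM.MDSCHEMA_CUBES;     -- cubes in the current database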

The CPU section reports only general box-level processor metrics, such as CPU Utilization, "Interrupts", "Kilobytes Transferred", and "Server Work Queues":

When I switch to the "Processing" tab I can see other box metrics, such as Queue Length, Thread and Process counts, and Context Switching:

The "Server" section is least informative. It provides me only my server version and edition and status of Disk and Memory:


Besides all this semi-static data, Spotlight is supposed to provide immediate, real-time counters from SSAS (judging by the names, these are the standard "MSOLAP" performance counters):
  • Queries Answered/s
  • Queries Requested/s
  • Queries From Cache Direct/s
  • Queries From File/s
  • Temp File Rows Written/s
  • Rows Read/s
  • Rows Written/s
  • Direct Hit Ratio
  • Current Latch waits
  • Current Lock waits
  • Query Pool Job Queue Length
  • Etc.

Unfortunately, I have only one database and couldn't generate enough activity to raise those numbers above zero.


I hope my info helps you answer questions similar to mine.

And stay tuned: I plan to try other SSAS monitoring tools and go beyond that.

Monday, July 27, 2020

SSRS. Use of Lookup type functions.

There are a lot of blog posts on the Internet about how to use Lookup and the other functions of this type in SSRS.
They are very straightforward and easy to use.
However, I managed to make a mistake, and I assume you have the same problem if you got to this page.

The Error

First, here is the error I got when I tried to use the Lookup function in SSRS:
The expression used for the calculated field 'ProductName' includes an aggregate, RowNumber, RunningValue, Previous or lookup function. Aggregate, RowNumber, RunningValue, Previous and lookup functions cannot be used in calculated field expressions.

How did I get there?

First, I created two datasets, "WorkOrders" and "Products".
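The exact queries do not matter for this post, but for context, here is roughly what such datasets could look like (a sketch, assuming the standard AdventureWorks sample tables):

-- "WorkOrders" dataset
SELECT WorkOrderID, ProductID, OrderQty, DueDate
FROM Production.WorkOrder;

-- "Products" dataset
SELECT ProductID, Name
FROM Production.Product;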

Then I wanted to include "Product Name" in my "Work Order Report".
Here is how I did it:

The WRONG WAY

(that is how it should NOT be done)
Obviously, I thought that since I was using a function, I had to use it in a calculated field.

But That is Wrong

You should do it the right way:

The RIGHT WAY

It is not obvious, but you have to use a "Query Field" to use the Lookup function:

Then you just have to specify the new field's name and insert a formula.
In my case I used the following (the arguments are: the key expression in the current dataset, the matching key expression in the destination dataset, the value to return, and the destination dataset's name):

=Lookup(Fields!ProductID.Value, Fields!ProductID.Value, Fields!Name.Value, "Products")


That worked very well for me, and I hope reading this will make you struggle less.
Good Luck


Wednesday, February 12, 2020

Use of Uniqueidentifier in a Persisted calculated column.

This post is for you in case you decide to use a UNIQUEIDENTIFIER column in your table and then think about including it in a persisted calculated column.
You also might find it useful if you like weird or funny SQL Server behavior.

First, here is the SQL Server version I have that problem on (after a while, Microsoft might fix the bug):
Microsoft SQL Server 2016 (SP2-CU11) (KB4527378) - 13.0.5598.27 (X64)

To demonstrate the problem in the simplest way, you can run the following script:
DROP TABLE IF EXISTS #tbl_Error_Test;
GO
CREATE TABLE #tbl_Error_Test(
       GuidOne UNIQUEIDENTIFIER,
       GuidTwo as CAST(GuidOne as CHAR(36)) PERSISTED
  );
GO
INSERT INTO #tbl_Error_Test(GuidOne) VALUES (NewID()), (NewID());
GO
SELECT * FROM #tbl_Error_Test;

GO

It will return something like this:

As you can see, the values in columns GuidOne and GuidTwo are different.
Moreover, if you run the following command, you will get a very unpleasant error:
DBCC CHECKTABLE('#tbl_Error_Test') WITH NO_INFOMSGS, EXTENDED_LOGICAL_CHECKS;

Msg 2537, Level 16, State 106, Line 152
Table error: object ID -1485216446, index ID 0, partition ID 6052840780930088960, alloc unit ID 2738196559872000000 (type In-row data), page (4:1079768), row 0. The record check (valid computed column) failed. The values are 2 and 0.
Msg 2537, Level 16, State 106, Line 152
Table error: object ID -1485216446, index ID 0, partition ID 6052840780930088960, alloc unit ID 2738196559872000000 (type In-row data), page (4:1079768), row 1. The record check (valid computed column) failed. The values are 2 and 0.
CHECKTABLE found 0 allocation errors and 2 consistency errors in table '#tbl_Error_Test_____________________________________________________________________________________________________000000001926' (object ID -1485216446).

repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKTABLE (tempdb.dbo.#tbl_Error_Test_____________________________________________________________________________________________________000000001926).

If you try to check what the correct value should be, it won't do you any good:
SELECT GuidOne, GuidThree = CAST(GuidOne as CHAR(36))
FROM #tbl_Error_Test;


SQL Server is smart enough to recognize that it is the same expression; instead of calculating the value, it returns the value stored in the table.

The only trick that works is to change the expression and convert the GUID to 37 characters instead of 36:
SELECT GuidOne, GuidFour = CAST(GuidOne as CHAR(37))

FROM #tbl_Error_Test;

Now you get the right result, but how do you fix the data in the table?

It is not easy, but possible. You just have to replace the Guid column with itself via a self-join, which forces SQL Server to rewrite each row and recompute the persisted column:
UPDATE G1 SET GuidOne = G2.GuidOne
FROM #tbl_Error_Test as G1
INNER JOIN #tbl_Error_Test as G2
       ON G1.GuidOne = G2.GuidOne;
GO
SELECT * FROM #tbl_Error_Test;

GO


It is fixed now, and that also clears the DBCC CHECKDB error.
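By the way, if you want to check whether other tables in your database might be affected, here is a little helper (my own sketch, not part of the original fix) that lists every persisted calculated column, so you can scan the definitions for UNIQUEIDENTIFIER casts:

SELECT SchemaName = OBJECT_SCHEMA_NAME(cc.object_id)
     , TableName  = OBJECT_NAME(cc.object_id)
     , ColumnName = cc.name
     , cc.definition
FROM sys.computed_columns as cc
WHERE cc.is_persisted = 1;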

If you see the same behavior on your SQL Server, you can vote for the bug to be fixed on Microsoft's site:
https://feedback.azure.com/forums/908035-sql-server/suggestions/39694663-use-of-guid-column-in-persisted-calculated-column

Friday, December 20, 2019

Recovering from Severe Database corruption error 9100

In my previous post I described a very dangerous SQL Server problem caused by the usually harmless commands "PARSE" and "TRY_PARSE": Having "NaN" value for REAL and FLOAT producing severe error.

This post is about the major issue it causes and how to fight it.

We will start by generating the problem.

Attention: Do not run that in production!!!
Here is a script to generate the issue:
DROP TABLE IF EXISTS tbl_Real_Test;
GO
CREATE TABLE tbl_Real_Test (ID UNIQUEIDENTIFIER, R REAL, F FLOAT);
GO
INSERT INTO tbl_Real_Test (ID, R, F)
SELECT NewID(), TRY_PARSE(r as REAL), TRY_PARSE(f as FLOAT) 
FROM (VALUES
       ('0','434534776544')
       ,('0.0000323', 'NaN')
       ,('3.3881317890172E+17','')
       ,('','000000000000x000000000000')
       ,('NULL','ABCDEFG')
       ,('Null','--------------------')
       ,('null','!!!!!!!!!!!!!!!!!!!!')
       ,('ABC','345435467655665676545')
       ,('NaN','5.3881317890172E+47')
       ,('Nan','0.0000000000345')
       ,('nan','434.34543543')
) as R(r,f);
GO
SELECT * FROM tbl_Real_Test
WHERE R Is not Null ;

GO

That script will generate the following error:
Msg 9100, Level 23, State 2, Line 57
Possible index corruption detected. Run DBCC CHECKDB.

Troubleshooting:

You might try to run "DBCC CHECKDB" against that database, like this:
DBCC CHECKDB('Slava_Temp_Test');

DBCC CHECKDB reported a couple of errors and, besides that, wrote an error dump into the ERRORLOG file and created four other dump files:
Msg 2570, Level 16, State 3, Line 62
Page (1:16200), slot 1 in object ID 2002106173, index ID 0, partition ID 72057594042843136, alloc unit ID 72057594048479232 (type "In-row data"). Column "F" value is out of range for data type "float".  Update column to a legal value.
Msg 2570, Level 16, State 3, Line 62
Page (1:16200), slot 8 in object ID 2002106173, index ID 0, partition ID 72057594042843136, alloc unit ID 72057594048479232 (type "In-row data"). Column "R" value is out of range for data type "real".  Update column to a legal value.
DBCC results for 'tbl_Real_Test'.
There are 9 rows in 1 pages for object "tbl_Real_Test".

CHECKDB found 0 allocation errors and 2 consistency errors in table 'tbl_Real_Test' (object ID 2002106173).

Fixing attempt:

To fix the issue you might try the following commands:
DBCC CHECKDB ('Slava_Temp_Test' , REPAIR_REBUILD) WITH ALL_ERRORMSGS;
GO
DBCC CHECKDB ('Slava_Temp_Test' , REPAIR_ALLOW_DATA_LOSS) WITH ALL_ERRORMSGS;

GO

However, in order to perform the repair, DBCC CHECKDB requires the database to be in single-user mode:
Msg 7919, Level 16, State 3, Line 141
Repair statement not processed. Database needs to be in single user mode.

The problem is that it might not be possible to switch the database into single-user mode, because it is part of an Availability Group and experiencing very heavy user activity. Using the conventional SQL Server tools would disrupt Production for who knows how long.

Solution:

To solve that problem we do not have to switch the database into single-user mode or restore it from a previous backup. You just run an update script over the problematic table and columns to replace the bogus values, which SQL Server can't interpret correctly, with NULLs:
UPDATE tbl_Real_Test
SET R = CASE CAST(R as BINARY(4))
       WHEN CAST( PARSE('NaN' as REAL) as BINARY(4))
       THEN Null Else R END
       , F = CASE CAST(F as BINARY(8))
       WHEN CAST( PARSE('NaN' as FLOAT) as BINARY(8))
       THEN Null Else F END;
GO
SELECT * FROM tbl_Real_Test
WHERE R Is not Null OR F Is not Null;

GO
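Before running the update (or to confirm it worked), you can count the rows holding the "NaN" bit pattern using the same trick (a sketch against the table from this post). The comparison is done on the raw bytes, because comparing the REAL/FLOAT values directly would trigger the same arithmetic error:

SELECT Bad_R = SUM(CASE WHEN CAST(R as BINARY(4)) = CAST(PARSE('NaN' as REAL) as BINARY(4)) THEN 1 ELSE 0 END)
     , Bad_F = SUM(CASE WHEN CAST(F as BINARY(8)) = CAST(PARSE('NaN' as FLOAT) as BINARY(8)) THEN 1 ELSE 0 END)
FROM tbl_Real_Test;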

The result of this query should look like this:

Disclaimer and Advertisement:

There are probably many other cases that might cause "Database corruption error 9100". If this particular solution does not apply to your environment, you can contact me for a resolution.


Tuesday, December 17, 2019

Having "NaN" value for REAL and FLOAT producing severe error.

Attention: DO NOT RUN THIS IN PRODUCTION!!!

This blog post is about a real error in SQL Server.
It was reproduced on SQL Server 2016 CU10 and on SQL Server 2019 RTM.

The database in trouble has a table with a FLOAT column. Its front-end application verifies user input and inserts the data into that column using the TRY_PARSE function.
The developers' intention was that any "Not-a-Number" or "Out-of-Range" values would be automatically converted to NULL, and it would be at the user's discretion to verify and fix those values.

However, one of the application users was very educated: instead of an empty space, NULL, or any other bad non-numeric value, the user supplied "NaN" for empty cells, which simply stands for "Not a Number".
That action caused a database corruption error!

Here is how to reproduce it.

First, we will create a table:
USE tempdb;
GO
DROP TABLE IF EXISTS tbl_Real_Float_Test;
GO
CREATE TABLE tbl_Real_Float_Test (ID INT IDENTITY(1,1), R REAL, F FLOAT);
GO

Then we will insert some bad data:
INSERT INTO tbl_Real_Float_Test(R,F)
SELECT TRY_PARSE('0' as REAL), TRY_PARSE('NaN' as FLOAT);
GO
INSERT INTO tbl_Real_Float_Test(R,F)
SELECT TRY_PARSE('NaN' as REAL), TRY_PARSE('0' as FLOAT);
GO

Now we will try to query it:
SELECT R FROM tbl_Real_Float_Test WHERE ID = 1;
GO
SELECT F FROM tbl_Real_Float_Test WHERE ID = 2;
GO
SELECT ID FROM tbl_Real_Float_Test
WHERE R = 0 OR F = 0;
GO

The first two queries will return zeroes. The third one will produce an error and also write it into the ERRORLOG:
Error: 9100, Severity: 23, State: 2. 
Possible index corruption detected. Run DBCC CHECKDB.

For the easiest reproduction of the destructive behavior of "NaN", you can run the following statement:
DECLARE @r REAL = PARSE('NaN' as REAL);
PRINT 'We can Assign "NaN" to a variable, but we cannot see it:';
SELECT @r;

You will get this error:
An error occurred while executing batch. Error message is: Arithmetic Overflow.

Interestingly enough, the TRY_PARSE command allows a "Culture" specification, which, in some cases, works very well:
DECLARE @Russian REAL = TRY_PARSE('NaN' as REAL USING 'Ru-RU');
DECLARE @US_Real REAL = TRY_PARSE('NaN' as REAL USING 'en-US');
SELECT Russian_Real = @Russian;
SELECT US_Real = @US_Real;

As you can see, the Russian culture treats the TRY_PARSE command correctly and produces NULL instead of the error. Besides Russian, it also works for Finnish, Swedish, Arabic, Traditional Chinese, and the undocumented "sr-Latn-CS".
No other culture treats "NaN" as expected.
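As a defensive workaround (my suggestion, so verify it on your build): TRY_CONVERT uses SQL Server's native string conversion rather than the .NET parser behind PARSE/TRY_PARSE, and to my knowledge it rejects "NaN" and returns NULL instead of the poisoned value:

DECLARE @F FLOAT = TRY_CONVERT(FLOAT, 'NaN');      -- NULL: the native conversion does not accept 'NaN'
DECLARE @R REAL = TRY_CONVERT(REAL, '0.0000323');  -- normal numeric strings still convert
SELECT F_Result = @F, R_Result = @R;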

If you can reproduce the bug, you can help Microsoft fix it by voting for it here: https://feedback.azure.com/forums/908035-sql-server/suggestions/39278515-try-parse-and-parse-produce-an-error-converting-n

Again: DO NOT RUN IT IN PRODUCTION, and do not forget to drop your test table!