Jan 08 2014
 

On mssqltips.com you can find my post Exporting CLR Assemblies from SQL Server back to .dll files.

This post describes how you can export the assemblies from SQL Server back to physical .dll files. This can be very useful when you do not have the source code for the assemblies inside the DB and you would like to do a security review of the assembly. This commonly happens if you receive the assembly as byte code in a T-SQL script.
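The post itself walks through the details, but the core idea is to read the binary content of the assembly from the catalog view sys.assembly_files and write it back to disk. A minimal sketch of the reading part (the assembly name is a placeholder):

```sql
-- read the binary content of a CLR assembly stored in the database;
-- 'MyClrAssembly' is a placeholder for the real assembly name
SELECT af.name, af.content
FROM sys.assembly_files af
JOIN sys.assemblies a ON a.assembly_id = af.assembly_id
WHERE a.name = 'MyClrAssembly';
-- the returned varbinary content can then be written to a .dll file,
-- e.g. with bcp or a small CLR/PowerShell helper
```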

Jun 27 2012
 

Recently I was going through the archives of posts on Kalen Delaney's blog and I came across a very interesting post Did you know? — Altering the length of a fixed-length column. The article describes how SQL Server wastes space when you alter a fixed-length column and increase its length.

It is interesting, so I wanted to take a closer look at the physical database pages to see what happens and how the data are stored after the ALTER is done.

Test data preparation and initial view of the data

So first let’s prepare a testing table with some testing data.
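The exact table definition from the original post is not reproduced here; a layout consistent with the offsets discussed below (this is an assumption) could be:

```sql
CREATE TABLE dbo.TestTable (
    ID   int        IDENTITY(1,1) NOT NULL,
    Col1 char(2000)               NOT NULL,   -- the column we will later extend
    Col2 char(1000)               NOT NULL,
    Col3 int                      NOT NULL
);

INSERT INTO dbo.TestTable (Col1, Col2, Col3)
VALUES ('A', 'AA', 1), ('B', 'BB', 2), ('C', 'CC', 3), ('D', 'DD', 4);
```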

Now if we take a look at the column information in the metadata using Kalen's query to sys.system_internals_partition_columns, we can see the offsets of the table columns.
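A query along these lines (a sketch, not necessarily identical to Kalen's original) shows the leaf offsets:

```sql
SELECT c.name AS column_name,
       ipc.partition_column_id,
       ipc.max_inrow_length,
       ipc.leaf_offset
FROM sys.partitions p
JOIN sys.system_internals_partition_columns ipc
    ON ipc.partition_id = p.partition_id
JOIN sys.columns c
    ON c.object_id = p.object_id
   AND c.column_id = ipc.partition_column_id
WHERE p.object_id = OBJECT_ID('dbo.TestTable');
```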

As we can see, the physical order is the order in which the columns were defined using the CREATE TABLE statement.

Once we have the data in the table, let's take a look at how the data are stored. The query below works only on SQL Server 2008+ and uses the undocumented virtual column %%physloc%%, which provides information about the physical location of each row in the database, and the undocumented function sys.fn_PhysLocCracker, which cracks the physical location into a human-readable FileID, PageID and SlotID.
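A sketch of such a query against the test table above:

```sql
SELECT t.ID,
       plc.file_id, plc.page_id, plc.slot_id
FROM dbo.TestTable t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc
ORDER BY plc.page_id, plc.slot_id;
```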

Let's take a look at the first physical page (page 168).
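We can dump the page with DBCC PAGE (the database name is a placeholder; the page number comes from the physical-location query above):

```sql
DBCC TRACEON(3604);              -- send DBCC PAGE output to the client
DBCC PAGE('TestDB', 1, 168, 3);  -- database, file id, page id, output format 3 (detailed)
```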

The partial results are below

We can see that the physical storage corresponds to the metadata stored in the system view sys.system_internals_partition_columns.

Altering the column length and analyzing impacts

Now let's alter the table, increase the length of Col1 to 3000 characters and take a look at what happens.
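Assuming the test table sketched above, the ALTER would look like this:

```sql
ALTER TABLE dbo.TestTable ALTER COLUMN Col1 char(3000) NOT NULL;
```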

If we take a look at sys.system_internals_partition_columns, we will see the following:

We can see that the offset of Col1 has changed and the column was moved to the end of the record. From here we can see that the original 2000 bytes were wasted. If we take a look at the physical page 168 as above, we will see that there is no change in the physical page, as this change to the column is a metadata-only change.

Impact on new records

So let's take a look at what impact this has on new records added to the table.
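A simple test (again a sketch) is to insert a few more rows after the ALTER and check which pages they land on:

```sql
INSERT INTO dbo.TestTable (Col1, Col2, Col3)
VALUES ('E', 'EE', 5), ('F', 'FF', 6), ('G', 'GG', 7), ('H', 'HH', 8);

SELECT t.ID, plc.file_id, plc.page_id, plc.slot_id
FROM dbo.TestTable t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc
ORDER BY plc.page_id, plc.slot_id;
```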

From the results we can see that the first 4 originally inserted records occupy only two pages, as two records were stored per database page. After the ALTER we can see that each newly inserted record occupies its own page. This is due to the fact that the record length has increased not only by the 1000 characters by which the length of Col1 was extended, but also by the original 2000 bytes that were wasted. Therefore the fixed data length of the record increased from 3008 bytes to 6008 bytes.

Now let's take a closer look at the physical page, for example the first page occupied by the newly inserted data (page 175).

Partial results of the DBCC command are here:

From the output above we can see that now on the physical page there is a Column 67108865 (DROPPED) following Column1, and Column2 has physically moved to the end of the record at offset 0xbc4 (3012). Also from the page dump we can see that the space from offset 0x8 to 0x7d7 contains garbage and that those 2000 bytes are wasted.

As we can see, the original query to sys.system_internals_partition_columns doesn't show the column with ID 67108865 (DROPPED). That's because it joins to sys.columns, and the DROPPED column is not part of the table, but is part of the partition. If we use the query without the join to sys.columns, it will be shown in the query output as well.
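A variant of the metadata query without the sys.columns join (sketch):

```sql
SELECT ipc.partition_column_id,
       ipc.max_inrow_length,
       ipc.leaf_offset,
       ipc.is_dropped
FROM sys.partitions p
JOIN sys.system_internals_partition_columns ipc
    ON ipc.partition_id = p.partition_id
WHERE p.object_id = OBJECT_ID('dbo.TestTable');
```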

It seems that the dropped columns have IDs starting from 67108865 and the numbers increase as more columns are dropped. Also the DROPPED columns have the flag is_dropped = 1.

Multiple updates of the column size

In the previous examples we took a look at a change to a single column, and in Kalen Delaney's post you can see the result if we want to alter multiple columns. If we alter multiple columns, the space allocated for all the original columns is dropped and new space is allocated. But what happens if we alter the same column multiple times? Let's make a simple test.
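For example (a sketch using the test table above), extend Col1 twice more and check the internals view again:

```sql
ALTER TABLE dbo.TestTable ALTER COLUMN Col1 char(3100) NOT NULL;
ALTER TABLE dbo.TestTable ALTER COLUMN Col1 char(3200) NOT NULL;

SELECT ipc.partition_column_id,
       ipc.max_inrow_length,
       ipc.leaf_offset,
       ipc.is_dropped
FROM sys.partitions p
JOIN sys.system_internals_partition_columns ipc
    ON ipc.partition_id = p.partition_id
WHERE p.object_id = OBJECT_ID('dbo.TestTable');
```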

From the example we can see that each change which increases the fixed column length causes the original column space to be dropped and additional space to be allocated.

Recovering the wasted space

It is great that extending the width of a fixed-length column is a metadata-only operation, as it is very quick and avoids blocking, especially on large tables. But on the other hand, as we can see, it can waste a significant amount of space.

In case there will be fewer new inserts into the table than the current amount of rows, we do not need to care much about the wasted space (from the point of view of storage), as the metadata change didn't affect the current records; on the current records we are even saving space, because we have extended the record length, and only the newly inserted records waste the space.

On the other hand, if we know there will be a lot of inserts and reads of the newly inserted records, then it's a good idea to recover the wasted space, as the data will consume more space and further reads will have to read more unnecessary data.

In the comments to the original post there are some suggestions on how to resolve the problem.

From my perspective, if the table is already clustered, we do not need to drop and recreate the clustered index; it is enough to REBUILD the index. The rebuild operation will reorder the data, free up the wasted space and write the original data with the new record length.
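For example (the index name is a placeholder):

```sql
-- rebuild a specific clustered index
ALTER INDEX IX_TestTable_Clustered ON dbo.TestTable REBUILD;

-- or simply rebuild all indexes on the table
ALTER INDEX ALL ON dbo.TestTable REBUILD;
```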

In the case of a heap, creating and dropping a clustered index would be quite a costly operation which moves the data twice. On small tables this doesn't matter, but on larger amounts of data it will be better to do a simple SELECT * INTO newTable FROM aTable, then drop the original table and rename the new one to the original name.
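A sketch of that approach using the test table above:

```sql
SELECT * INTO dbo.TestTable_New FROM dbo.TestTable;   -- copy the data with the new record layout
DROP TABLE dbo.TestTable;
EXEC sp_rename 'dbo.TestTable_New', 'TestTable';
```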

We can see that the new table has no wasted space and each page again holds two records.

Of course, if there are foreign keys and indexes, then those will have to be recreated. Anyway, this will cost much less I/O and processing power than creating and dropping the clustered index.

Conclusion

As we can see from the examples above, altering a fixed-length column and increasing its length, even though it is a metadata-only operation, causes the originally allocated space to be dropped and new space to be allocated in the row for all newly inserted or updated rows in the table. So be careful when altering fixed-length columns, especially when you are doing multiple alters to a single column, as significant space can be wasted. Knowing that the dropped columns have the flag is_dropped = 1 in sys.system_internals_partition_columns, we can use the query below to list all the tables containing DROPPED columns with wasted space.
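A sketch of such a query (the wasted bytes per row are approximated by summing the lengths of the dropped columns):

```sql
SELECT OBJECT_SCHEMA_NAME(p.object_id) AS schema_name,
       OBJECT_NAME(p.object_id)        AS table_name,
       p.index_id,
       p.partition_number,
       SUM(ipc.max_inrow_length)       AS wasted_bytes_per_row
FROM sys.partitions p
JOIN sys.system_internals_partition_columns ipc
    ON ipc.partition_id = p.partition_id
WHERE ipc.is_dropped = 1
GROUP BY p.object_id, p.index_id, p.partition_number;
```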

Jun 12 2012
 

Among the many new features introduced in SQL Server 2012, new windowing capabilities were also introduced. The new functionality allows us to use the ORDER BY clause in the OVER clause with aggregate functions, and new ROWS and RANGE clauses were introduced to limit the rows being processed. The ORDER BY defines the order of row processing and the ROWS/RANGE clauses put limits on the rows being processed within the partition. All the details related to the OVER clause can be found on MSDN: OVER Clause (Transact-SQL).

ROWS/RANGE clause

The ROWS clause limits the rows in a partition by specifying a fixed number of rows preceding or following the current row. The preceding and following rows are determined by the order specified in the ORDER BY clause.

The limit can be specified by several methods:

  • <unsigned integer> PRECEDING – fixed number of preceding rows
  • CURRENT ROW – representing the current row being processed
  • UNBOUNDED PRECEDING – all preceding rows
  • <unsigned integer> FOLLOWING – fixed number of following rows
  • UNBOUNDED FOLLOWING – all rows following the current row

So we can specify the frame limits, for example, as follows.
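A minimal, self-contained illustration of a few possible frame specifications (it uses an inline derived table so it runs on its own):

```sql
SELECT v.n,
       SUM(v.n) OVER (ORDER BY v.n ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)         AS last_three,
       SUM(v.n) OVER (ORDER BY v.n ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total,
       SUM(v.n) OVER (ORDER BY v.n ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)         AS current_plus_next,
       SUM(v.n) OVER (ORDER BY v.n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS reverse_running_total
FROM (VALUES (1), (2), (3), (4), (5)) AS v(n);
```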

The RANGE clause can only be used with the UNBOUNDED limits and CURRENT ROW. The difference between the ROWS and RANGE clauses is that ROWS works with physical rows while RANGE works with a range of rows based on the current row's value in terms of the ORDER BY clause. This means that for the ROWS clause, CURRENT ROW represents only the single current row being processed. For RANGE, CURRENT ROW represents all the rows with the same value in the fields specified in the ORDER BY clause within the current partition as the current row being processed. So if we use RANGE and multiple rows have the same rank in terms of order within the partition, then all those rows are treated as the current row.

When there is no ROWS/RANGE clause specified after the ORDER BY clause, then the default RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW is used by SQL Server.

Samples of how to use the window functions

Let's take a look at a few samples of how we can use the window functions and what results they provide.

Test data preparation

To be able to test the new functionality, let's prepare a small table with some test transactions.
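The exact test data from the original post are not reproduced here; a possible table (the Amount column and the values are assumptions, TransactionID and TransactionDate are referenced below) could be:

```sql
CREATE TABLE dbo.Transactions (
    TransactionID   int           NOT NULL PRIMARY KEY CLUSTERED,
    TransactionDate datetime      NOT NULL,
    Amount          decimal(10,2) NOT NULL
);

INSERT INTO dbo.Transactions (TransactionID, TransactionDate, Amount)
VALUES (1, '20000101', 100.00),
       (2, '20000101', -50.00),   -- duplicate dates on purpose, to show ROWS vs. RANGE behavior
       (3, '20000105',  20.00),
       (4, '20000105',  30.00),
       (5, '20000115', 200.00),
       (6, '20000115', -70.00),
       (7, '20000131',  10.00);
```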

Window functions samples

If we try any of the queries below, they will provide the same results.
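For example (a sketch against the test table above), the default frame and the explicit frame are equivalent here because TransactionID is unique:

```sql
-- implicit frame: RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
SELECT TransactionID, TransactionDate, Amount,
       SUM(Amount) OVER (ORDER BY TransactionID) AS CumulativeBalance
FROM dbo.Transactions;

-- explicit ROWS frame
SELECT TransactionID, TransactionDate, Amount,
       SUM(Amount) OVER (ORDER BY TransactionID
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CumulativeBalance
FROM dbo.Transactions;
```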

The results are below and we can see that a correct cumulative balance is calculated.

ROWS clause with non-unique order
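A sketch of such a query, ordering by the non-unique TransactionDate:

```sql
SELECT TransactionID, TransactionDate, Amount,
       SUM(Amount) OVER (ORDER BY TransactionDate
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CumulativeBalance
FROM dbo.Transactions;
```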

The results will be the same as in the previous example. They are the same because there is no parallelism and we have a CLUSTERED INDEX on TransactionID (otherwise the final order could be different, because the order of rows with the same TransactionDate is not guaranteed here).

RANGE clause with non-unique order
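The same query, only with RANGE instead of ROWS:

```sql
SELECT TransactionID, TransactionDate, Amount,
       SUM(Amount) OVER (ORDER BY TransactionDate
                         RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CumulativeBalance
FROM dbo.Transactions;
```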

Here we see that the results are quite different. The final sum is the same, but the intermediate values are not.

Here we can see that RANGE works as described above. All rows with the same value in the ORDER BY clause are considered the current row. Therefore for the dates '2000/01/01', '2000/01/05' and '2000/01/15' the values for each date are the same.

Working with FOLLOWING Rows

All the examples above worked with the current row and all preceding rows. Besides this we can also work with rows following the current row in the particular order.

Here are a few other examples incorporating FOLLOWING rows as well.
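For example (sketches against the test table above), a centered three-row moving sum and a reverse running total:

```sql
SELECT TransactionID, TransactionDate, Amount,
       SUM(Amount) OVER (ORDER BY TransactionID
                         ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)         AS MovingSum3,
       SUM(Amount) OVER (ORDER BY TransactionID
                         ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS RemainingBalance
FROM dbo.Transactions;
```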

Example with Partitioning results

In the previous examples we worked with a single partition. The OVER clause also allows partitioning the results, so let's see a few examples with partitioning.
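For example (a sketch; partitioning by the month of TransactionDate is an assumption, the original samples may partition differently), a running total restarted for each month:

```sql
SELECT TransactionID, TransactionDate, Amount,
       SUM(Amount) OVER (PARTITION BY YEAR(TransactionDate), MONTH(TransactionDate)
                         ORDER BY TransactionID
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS MonthlyRunningTotal
FROM dbo.Transactions;
```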

When to use ROWS and when RANGE

Now we can ask when we should use the ROWS clause and when the RANGE clause to limit the rows. The answer comes from the definition of how the ROWS and RANGE clauses work. As described, ROWS works with each individual row, while RANGE handles multiple rows with the same order position as the current row.

So in case the combination of fields specified in the ORDER BY clause does not uniquely determine the order of rows (as in the examples above where only TransactionDate was used), then you should use RANGE, as the processing order of rows with the same order position is not guaranteed. In case the rows are uniquely identified, then ROWS should be used, as there are no rows with the same order position in the partition.

Conclusion

The new windowing functions bring new possibilities in writing T-SQL queries and can simplify a lot of tasks which were problematic to write without these constructs. They allow us to bypass recursive CTEs and other solutions for calculating running totals or averages without knocking down the server, and also let us avoid quirky updates and CLR solutions, which have some pitfalls when they are used.

In my next post I will take a closer look at the running totals problem when using this new windowing functionality. I will also take a closer look at the query plans produced by these constructs and give some advice on using them.

Jan 27 2012
 

Sometimes it may happen that you need to remove accents (diacritics) from a string when you are querying data or when you are storing data into a particular table.

There are several methods to achieve this. One of the possible solutions is using CONVERT with COLLATE, as tweeted by MVP Mladen Prajdic and mentioned on Luke Jian's blog, but this method has an issue I will mention below.

Another solution could be the replacement of accented characters with the ones without accents. But this is a very problematic solution, as you have to do a lot of replacements (the performance can suffer), it is problematic to cover all the possible accented characters, and you can easily miss one. Similar to this would be the creation of a replacement table and implementation of a function similar to the splitting function using a Tally Table (Jeff Moden). But again you would have to write all possible combinations of accented and unaccented characters into the replacement table.

So if you do not want to change the collation, would like to avoid the possible issues of the CONVERT method, and want to avoid writing replacement tables etc., you can easily use a very simple and well-known CLR method. This method uses string normalization and the CharUnicodeInfo class from the System.Globalization namespace. Based on this you can write a very simple scalar CLR function.

Once you compile the above-mentioned function into a .NET assembly, you can register the assembly and the function in your database.
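A sketch of the T-SQL registration; the assembly, class and function names here are assumptions:

```sql
CREATE ASSEMBLY [PPSqlClrSafe]
FROM 'C:\CLR\PPSqlClrSafe.dll'      -- path to the compiled assembly (placeholder)
WITH PERMISSION_SET = SAFE;
GO
CREATE FUNCTION dbo.RemoveAccent(@input nvarchar(max))
RETURNS nvarchar(max)
AS EXTERNAL NAME [PPSqlClrSafe].[StringFunctions].[RemoveAccent];
GO
```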

Once the function is registered, you can easily use it to remove accents (diacritics) from whatever string you want.
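For example (the function name follows the registration sketch above):

```sql
SELECT dbo.RemoveAccent(N'Příliš žluťoučký kůň, Ærøskøbing, Ελληνικά') AS WithoutAccents;
```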

Which produces the result below:

When you compare it to the CONVERT with COLLATE method, you will find that it doesn't have the problem with non-existent characters. CONVERT converts the characters to a particular character set, which does not need to contain all the characters of the source character set. We can see this e.g. on the Norwegian and Greek characters.

If we use the CONVERT with COLLATE method, we receive:
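One commonly cited form of that trick looks like the following (the exact collation used in the original tweet and post may differ); characters missing from the target code page are lost and come back as '?':

```sql
SELECT CONVERT(varchar(100),
               N'Příliš žluťoučký kůň, Ærøskøbing, Ελληνικά'
               COLLATE SQL_Latin1_General_CP1251_CS_AS) AS ConvertedString;
```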

Results:

From the above we can see that the CLR solution provides much better results and you do not lose any characters by conversion to a non-Unicode character set.

If you are interested, you can grab the sample C# project here: PPSqlClrSafe_AccentRemoval.zip

Any comments are welcomed.

Jan 24 2012
 

This article presents a custom component for transforming columns inside an SSIS Data Flow into XML.

If you are writing ETL processes using SSIS, you often come to a situation when you need to log errors or other problems which can occur during the ETL process. An example can be a failure of data conversion during the ETL process. In that situation you usually would like to know which record exactly caused that error and log such conversion errors into a particular error table.

A problem occurs when you have a lot of tables and want to see what data were being converted and caused that particular error. As SSIS requires the sources and destinations to be strongly mapped and you have different sources with different structures, this leads to a situation where you have a lot of different error tables – generally one per source structure at which an error can occur.

This situation can be easily solved by a custom Data Flow component which converts all source columns into XML. In that situation you can have only one error table, which will contain e.g. Error Code, Error Column, Description and an XML field holding all the data of the particular row which caused the error, encoded in XML form. This rapidly simplifies error logging and also error log monitoring, as you need to monitor only a single table.
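A possible shape of such a single error table (the names and types here are assumptions):

```sql
CREATE TABLE dbo.ETLErrorLog (
    ErrorLogID       int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    LogDate          datetime2(3)      NOT NULL DEFAULT SYSDATETIME(),
    SourceID         int               NULL,     -- SourceID property of the component
    SourceName       nvarchar(128)     NULL,     -- SourceName property of the component
    ErrorCode        int               NULL,
    ErrorColumn      int               NULL,
    ErrorDescription nvarchar(2048)    NULL,
    ColumnsXMLData   xml               NULL      -- the failing row, encoded as XML
);
```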

So the SSIS Data Flow with the Custom Columns to XML Transformation could look like the one below.

Columns To Xml Data Flow

ColumnsToXML Properties
ColumnsToXML Input Columns

You can provide SourceID and SourceName properties which are then part of the produced XML, so you can easily identify the source. In the Input Columns you select the columns you want to be written to the XML. You can also specify an OutputAlias, and then the column names written into the XML will be the ones specified in the OutputAlias property.

The Log written using that component and the XML produced will look like the ones below.

ColumnsToXML Log

Using the .value() method of the XML data type we can easily query the necessary information from the log table for whatever source in the table. For our sample scenario we would like to query the Name, ValueInt and ValueDate columns stored as XML. This can be achieved by a SQL query like the one below:
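A sketch only – the actual XML shape produced by the component may differ; here it is assumed to be something like <row><Column name="…">value</Column>…</row>, and the error table follows the sketch above:

```sql
SELECT ErrorCode,
       ErrorDescription,
       ColumnsXMLData.value('(/row/Column[@name="Name"]/text())[1]',      'nvarchar(128)') AS Name,
       ColumnsXMLData.value('(/row/Column[@name="ValueInt"]/text())[1]',  'int')           AS ValueInt,
       ColumnsXMLData.value('(/row/Column[@name="ValueDate"]/text())[1]', 'datetime')      AS ValueDate
FROM dbo.ETLErrorLog
WHERE SourceName = N'MySampleSource';   -- hypothetical source name
```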

As you can see from the samples above, you can easily create a single error table for whatever ETL logging you need and rapidly simplify the logging, as you do not need to create error tables with separate structures for each source.

ColumnsToXML Implementation

The ColumnsToXmlTransform is a custom implementation of the PipelineComponent. It is developed in C# for .NET Framework 3.5 and uses LINQ to XML for creating the XML output.

The component processes selected input columns, generates the XML and stores it in the output column called ColumnsXMLData.

Except for the ColumnsXMLData column, the component also provides two other output columns (SourceID and SourceName) to allow you to identify the row source once it is written as XML into the database, which simplifies querying for data coming from particular sources as shown above.

Once the component is built, it is necessary to install it into the Global Assembly Cache and into the PipelineComponents folder in the SQL Server Integration Services installation path on the machine on which the SSIS packages using this component will be running.

To install it into the GAC you can use gacutil.exe, which is part of the Windows SDK.

The component is now publicly available with other components on GitLab as projects for SSIS 2008 – SSIS 2016.

If you run the development environment with elevated Administrator privileges, the project contains Post Build events which will install the assembly into the GAC (it assumes you have Windows SDK 7.0 installed – if you have a newer version, modify the path to gacutil.exe accordingly) and also copies the assembly into the PipelineComponents folder of the default SQL Server Integration Services installation. If your installation directory differs, please modify the paths in the Post Build events accordingly.

Once you have compiled the code, installed it into the GAC and copied it to the PipelineComponents folder, you can add the component to the Data Flow items palette. Right-click on the palette and select Choose Items…

Choose Data Flow Transformation Items Pop-up
Choose SSIS Data Flow Items

Once you select the component, it will appear on the toolbox and you can start using it.

Columns To XML On Palette

Once I have more time, I will try to create an installer for the component. Anyway, any comments are welcome.

Nov 30 2011
 

The last few months I was quite busy at work and didn't have much time to write blog posts. I was also preparing for a Microsoft certification exam.

Finally, last Wednesday I successfully passed Exam 70-448: TS: Microsoft SQL Server 2008, Business Intelligence Development and Maintenance and became an MCTS.

So I hope I will have more free time now and be able to publish some of my posts I have in a queue.

Anyway I would like to start with preparation for the 70-452 certification exam to become MCITP.

Nov 30 2011
 

Finally I had time to make a second update to my post Querying Active Directory on SQL Server using CLR. This update should finally resolve the issue when you need to return large numbers of properties from Active Directory.

The solution is very simple: it adds the possibility to specify the column length (maximum length) for each returned property. All the structures in the code are left the same as they were originally. The only change is that the list of properties is no longer passed as a comma-separated list; a semicolon is now used to delimit the properties, and a comma can be used to provide a column length. If the length is not provided, then the default of 4000 is used, as it was before.

So don’t hesitate and check the updated article.

I've also provided a link to an updated Visual C# project with all the code.

May 25 2011
 

In my earlier post SQL Server 2005 and SQL Server 2008 Regular Expressions and pattern matching I was focusing on regular expression matching on SQL Server using CLR.

That post was focused only on the matches themselves, but didn't cover another great feature of regular expressions: replacements.

When using replacements, the regular expression is first executed on the source string, then a replacement is applied to one, several or all matches, and the result of the replacement is returned.

Here are the CLR functions which handle the replacements.

RegExReplace

This is a scalar CLR function which handles a regular expression replacement. It allows replacing a single match from within a string, or even all matches inside the string if matchID = 0 is passed as an argument.

RegExMatchesReplace

In contrast to RegExReplace, RegExMatchesReplace is a CLR table-valued function which returns all the matches as separate rows together with their replacements. It returns a table with three columns: MatchID, which represents the ID of the match within the original string; Match, which represents the particular RegEx match within the string; and Result, which is the result of the replacement applied to the Match.

So once the class is compiled into an assembly, we can register the functions inside a SQL Server database.
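A registration sketch; the assembly and class names and the parameter names are assumptions, the function and column names come from the description above:

```sql
CREATE ASSEMBLY [PPSqlRegEx]
FROM 'C:\CLR\PPSqlRegEx.dll'        -- path to the compiled assembly (placeholder)
WITH PERMISSION_SET = SAFE;
GO
CREATE FUNCTION dbo.RegExReplace(
    @sourceString nvarchar(max),
    @pattern      nvarchar(4000),
    @replacement  nvarchar(4000),
    @matchID      int               -- 0 = replace all matches
)
RETURNS nvarchar(max)
AS EXTERNAL NAME [PPSqlRegEx].[RegExFunctions].[RegExReplace];
GO
CREATE FUNCTION dbo.RegExMatchesReplace(
    @sourceString nvarchar(max),
    @pattern      nvarchar(4000),
    @replacement  nvarchar(4000)
)
RETURNS TABLE (MatchID int, Match nvarchar(max), Result nvarchar(max))
AS EXTERNAL NAME [PPSqlRegEx].[RegExFunctions].[RegExMatchesReplace];
GO
```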

Testing the Replace method

Here are some sample queries so you can imagine how the functions work.

  • Replace the year in all dates with the year 2011
  • Remove the values from the Key=Value pairs and leave only the Key values separated by semicolons
  • Get a list of all dates from within a string and replace the year of each date with 2011
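Sketches of such queries, assuming the function signatures registered above:

```sql
-- 1) replace the year in all dates with 2011 (matchID = 0 replaces all matches)
SELECT dbo.RegExReplace(
           N'Invoices from 2009-05-25 and 2010-11-30',
           N'(\d{4})(-\d{2}-\d{2})',
           N'2011$2',
           0) AS AllDatesIn2011;

-- 2) remove the values from Key=Value pairs and keep only the keys
SELECT dbo.RegExReplace(
           N'Key1=Val1;Key2=Val2;Key3=Val3',
           N'=[^;]*',
           N'',
           0) AS KeysOnly;

-- 3) list all dates found in a string, each with its year replaced by 2011
SELECT MatchID, Match, Result
FROM dbo.RegExMatchesReplace(
         N'Invoices from 2009-05-25 and 2010-11-30',
         N'(\d{4})(-\d{2}-\d{2})',
         N'2011$2');
```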

From the examples we can see that CLR gives us real power for using regular expressions on SQL Server. It allows us to make complex transformations which would not be possible using pure T-SQL, and in the case of transformations which can be done in pure T-SQL, the CLR RegEx solution is mostly much faster and more straightforward.

Apr 21 2011
 

I've just posted an update to my article Querying Active Directory on SQL Server using CLR. I've added another 2 parameters to the samples: one of them allows controlling the page size limit for the paged AD search and the second one allows limiting the number of rows returned to the client. This can be handy when the AD query can return more results than we need; in such a case it's easy to limit the number of rows.

I've updated the article after a comment from Chris, who received an insufficient memory error when he wanted to return a higher number of fields from AD.

In this case, lowering the page size from 1000 to a lower value should avoid this kind of problem.

Apr 13 2011
 

In my previous article Querying Active Directory on SQL Server using T-SQL I showed the possibilities of querying Active Directory using T-SQL, specifically using a linked server and the OPENQUERY statement, or without a linked server using the OPENROWSET statement.

This is an updated article and includes modifications based on some of the comments, to resolve issues with a large number of returned AD properties.

That article mentioned some limitations of querying AD using T-SQL, and CLR will help us bypass those limitations, especially the limit of 1000 returned records, as we can use pagination in the CLR code effectively.

For this purpose I'm creating a CLR stored procedure which takes several arguments and returns a result set. The reason I'm going to use a CLR stored procedure is that a stored procedure allows me to return a dynamic result set. It means I can specify which AD properties to return and those will be returned as columns of the result set. If we went with a CLR table-valued function instead, we would have to create a separate function for each combination of properties we would like to return.

So here is the CLR class for querying AD.

The class contains three public methods (CLR stored procedures): QueryAD, QueryADUName and QueryADAuth. The first one queries AD using the current user credentials and the default authentication method, the second queries AD using provided user credentials and the default authentication method, and in the third one we can also specify the authentication method.

Those methods call a private method SearchAD, which takes care of the AD searching and returns the result to the client.

After the comment from Chris, I've updated my sample code to also have the optional parameter pageSize, which allows reducing the page size of the paged search used for querying AD and avoiding insufficient memory problems when querying a higher number of AD attributes.

I've also added a parameter rowsLimit which allows limiting the maximum number of rows returned, which can be useful especially when querying a very large AD. A break was added to the loop which iterates the results from AD, so it stops once the number of processed items reaches the limit.

UPDATE:

Finally I've also updated the private static void SearchAD method, so now you pass the properties not as a comma-separated list but as a semicolon-separated list. You can use a comma to specify the return length of each property. If the length is not specified, then the method uses a default length of 4000 characters. This update should finally solve the issues with a large number of returned properties, when previously all of them were returned as nvarchar(4000).

The updated part of the code is the for loop which processes the properties and generates the result set metadata, starting with the comment //Trim properties and prepare result set metadata, also process specified lengths.

END OF UPDATE:

To be able to compile the code, for example using Visual C# Express, it is necessary to add a reference to the System.DirectoryServices assembly in the project.

Once we compile the code and create, say, an ADServices.dll assembly, we can register that assembly and the CLR stored procedures in our database.

Because the ActiveDirectory class uses the System.DirectoryServices assembly, we have to use the UNSAFE PERMISSION_SET for our assembly, and so the database using that assembly has to be TRUSTWORTHY.

As we are using the System.DirectoryServices assembly, we have to register it in our database prior to registering our own assembly, otherwise we will not be able to register it.
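A sketch of the registration steps; the database name and the .dll paths are placeholders:

```sql
ALTER DATABASE [MyDatabase] SET TRUSTWORTHY ON;
GO
-- register the .NET Framework assembly first (the path depends on the installed framework version)
CREATE ASSEMBLY [System.DirectoryServices]
FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.DirectoryServices.dll'
WITH PERMISSION_SET = UNSAFE;
GO
-- then register our own assembly
CREATE ASSEMBLY [ADServices]
FROM 'C:\CLR\ADServices.dll'
WITH PERMISSION_SET = UNSAFE;
GO
```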

Once we have successfully registered those assemblies, we can create the CLR Stored procedures in our DB.
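A sketch of the procedure creation for the first method; the parameter names and their order here are assumptions based on the description above (the other two procedures would be created the same way, with the extra credential parameters):

```sql
CREATE PROCEDURE dbo.QueryAD
    @adRoot     nvarchar(255),        -- e.g. N'LDAP://DC=mydomain,DC=local' (assumed parameter)
    @filter     nvarchar(max),        -- LDAP search filter
    @properties nvarchar(max),        -- semicolon-separated properties, optional ',length' per property
    @pageSize   int,                  -- page size for the paged AD search
    @rowsLimit  int                   -- 0 = no limit on returned rows
AS EXTERNAL NAME [ADServices].[ActiveDirectory].[QueryAD];
GO
```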

After successful creation of the stored procedures we can start querying AD. The filter parameter is an LDAP filter according to the Search Filter Syntax.
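A hypothetical call, following the parameter sketch above:

```sql
EXEC dbo.QueryAD
    @adRoot     = N'LDAP://DC=mydomain,DC=local',
    @filter     = N'(&(objectCategory=person)(objectClass=user))',
    @properties = N'cn,128;mail,256;distinguishedName,512',
    @pageSize   = 500,
    @rowsLimit  = 0;
```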

As we use a stored procedure for querying AD, we cannot work directly with the result set any further (OK, on Denali it will be possible thanks to EXECUTE ... WITH RESULT SETS). On SQL Server 2005 and 2008 we can store the results e.g. into a table variable or a temp table and then work with them as usual. From the CLR code we can see that the result set contains all the AD properties we have passed as a parameter, and the order is exactly the same as in the input parameter. The data type of each returned column is nvarchar with the length which was specified in the properties list. If no length was specified, or the length was less than one or greater than 4000, then the return type is nvarchar(4000).

From the examples above we can see that once we create the CLR stored procedures for querying AD, the queries to AD are quite easy. Compared to the T-SQL linked server or OPENROWSET solutions we have much greater possibilities and, what is most important, we are not limited to 1000 results per query, so we can easily query all the objects in AD.

If you are interested, you can download the sample ADServices Visual C# 2010 Express project. It contains all the latest updates I have mentioned.

Here you can download an updated sample PPSqlDirectoryServices Visual C# 2010 project with all the latest updates implemented.