Jan 08 2014
 

On mssqltips.com you can find my post Exporting CLR Assemblies from SQL Server back to .dll files.

This post describes how you can export the assemblies from SQL Server back to physical .dll files. This can be very useful in cases when you do not have the source code for the assemblies inside the DB and you would like to do a security review of the assembly. This commonly happens if you receive the assembly as byte code in a T-SQL script.
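The core idea, as a minimal sketch: SQL Server keeps the binary image of every CLR assembly in the sys.assembly_files catalog view, so the bytes can be selected and written back to a .dll file (the full export procedure is in the linked article):

SELECT
    a.name AS assembly_name,
    af.name AS file_name,
    af.content          -- varbinary(max) holding the binary image of the .dll
FROM sys.assemblies a
INNER JOIN sys.assembly_files af ON af.assembly_id = a.assembly_id;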

Jun 27 2012
 

Recently I was going through the archives of Kalen Delaney’s blog and I came across a very interesting post: Did you know? — Altering the length of a fixed-length column. This article explains how SQL Server wastes space when you alter a fixed-length column and increase its length.

It is interesting, so I wanted to take a closer look at this, inspect the physical database pages, and see how the data are stored after the ALTER is done.

Test data preparation and initial view of the data

So first let’s prepare a testing table with some testing data.

CREATE TABLE dbo.AlterTest (
    ID int NOT NULL IDENTITY(1,1),
    col1 char(2000),
    col2 char(1000),
    col3 int
)
GO
INSERT INTO dbo.AlterTest (col1, col2, col3)
VALUES('aaa', 'bbb', ABS(BINARY_CHECKSUM(NEWID())))
GO 4

Now if we take a look at the column information in the metadata tables using Kalen’s query against sys.system_internals_partition_columns, we can see the offsets of the table columns.

SELECT  
    c.name AS column_name, 
    column_id, 
    max_inrow_length,
    pc.system_type_id, 
    leaf_offset 
FROM sys.system_internals_partition_columns pc
INNER JOIN sys.partitions p ON p.partition_id = pc.partition_id 
INNER JOIN sys.columns c ON column_id = partition_column_id AND c.object_id = p.object_id
WHERE p.object_id=object_id('AlterTest');
column_name   column_id   max_inrow_length system_type_id leaf_offset
------------- ----------- ---------------- -------------- -----------
ID            1           4                56             4
col1          2           2000             175            8
col2          3           1000             175            2008
col3          4           4                56             3008

As we can see, the physical order is the order in which the columns were defined using the CREATE TABLE statement.

Once we have the data in the table, let’s take a look at how the data are stored. The query below works only on SQL Server 2008+ and uses the undocumented virtual column %%physloc%%, which provides information about a row’s physical location in the database, and the undocumented function sys.fn_PhysLocCracker, which cracks the physical location into a human-readable FileID, PageID and SlotID.

SELECT
    DB_ID() AS DBID
    ,pl.*
    ,t.*
FROM dbo.AlterTest t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) pl
DBID   file_id     page_id     slot_id     ID          col1    col2    col3
------ ----------- ----------- ----------- ----------- ------- ------ -----------
16     1           168         0           1           aaa     bbb    587800818
16     1           168         1           2           aaa     bbb    1396332306
16     1           171         0           3           aaa     bbb    844570652
16     1           171         1           4           aaa     bbb    1332601405

Let’s take a look at the first physical page, page 168.

dbcc traceon (3604,-1)
GO
dbcc page(16,1,168,3)
GO

The partial results are below

PAGE: (1:168)

BUFFER:

BUF @0x00000005011003C0

bpage = 0x000000048C308000          bhash = 0x0000000000000000          bpageno = (1:168)
bdbid = 16                          breferences = 0                     bcputicks = 0
bsampleCount = 0                    bUse1 = 44191                       bstat = 0xb
blog = 0xab21cccc                   bnext = 0x0000000000000000          

PAGE HEADER:

Page @0x000000048C308000

m_pageId = (1:168)                  m_headerVersion = 1                 m_type = 1
m_typeFlagBits = 0x0                m_level = 0                         m_flagBits = 0x8000
m_objId (AllocUnitId.idObj) = 98    m_indexId (AllocUnitId.idInd) = 256 
Metadata: AllocUnitId = 72057594044350464                                
Metadata: PartitionId = 72057594039959552                                Metadata: IndexId = 0
Metadata: ObjectId = 549576996      m_prevPage = (0:0)                  m_nextPage = (0:0)
pminlen = 3012                      m_slotCnt = 2                       m_freeCnt = 2062
m_freeData = 6126                   m_reservedCnt = 0                   m_lsn = (37:227:3)
m_xactReserved = 0                  m_xdesId = (0:0)                    m_ghostRecCnt = 0
m_tornBits = 0                      DB Frag ID = 1                      

Allocation Status

GAM (1:2) = ALLOCATED               SGAM (1:3) = ALLOCATED              
PFS (1:1) = 0x62 MIXED_EXT ALLOCATED  80_PCT_FULL                        DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED           

Slot 0 Offset 0x60 Length 3015

Record Type = PRIMARY_RECORD        Record Attributes =  NULL_BITMAP    Record Size = 3015

Memory Dump @0x000000002054A060

0000000000000000:   1000c40b 01000000 61616120 20202020 20202020  ..Ä.....aaa         
0000000000000014:   20202020 20202020 20202020 20202020 20202020                      
0000000000000028:   20202020 20202020 20202020 20202020 20202020                      
...
...
00000000000007BC:   20202020 20202020 20202020 20202020 20202020                      
00000000000007D0:   20202020 20202020 62626220 20202020 20202020          bbb         
00000000000007E4:   20202020 20202020 20202020 20202020 20202020                      
...
...
0000000000000BA4:   20202020 20202020 20202020 20202020 20202020                      
0000000000000BB8:   20202020 20202020 f2200923 040000                     ò 	#...

Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

ID = 1                              

Slot 0 Column 2 Offset 0x8 Length 2000 Length (physical) 2000

col1 = aaa                                                                                                       

Slot 0 Column 3 Offset 0x7d8 Length 1000 Length (physical) 1000

col2 = bbb                                                                                                       

Slot 0 Column 4 Offset 0xbc0 Length 4 Length (physical) 4

col3 = 587800818                    

Slot 1 Offset 0xc27 Length 3015

We can see that the physical storage corresponds to the metadata stored in the system view sys.system_internals_partition_columns.

Altering the column length and analyzing impacts

Now let’s alter the table, increase the length of Col1 to 3000 characters, and take a look at what happens.

ALTER TABLE dbo.AlterTest
    ALTER COLUMN Col1 char(3000)
GO

If we take a look at sys.system_internals_partition_columns, we will see the following:

column_name   column_id   max_inrow_length system_type_id leaf_offset
------------- ----------- ---------------- -------------- -----------
ID            1           4                56             4
col2          3           1000             175            2008
col3          4           4                56             3008
col1          2           3000             175            3012

We can see that the offset of Col1 has changed and the column was moved to the end of the record, so the original 2000 bytes are now wasted. If we take a look at the physical page 168 as above, we will see that there is no change in the physical page, as this change to the column is a metadata-only change.

Impact on new records

So let’s take a look at what impact this has on new records added to the table.

INSERT INTO dbo.AlterTest (col1, col2, col3)
VALUES('ccc', 'ddd', ABS(BINARY_CHECKSUM(NEWID())))
GO 4
SELECT
    DB_ID() AS DBID
    ,pl.*
    ,t.*
FROM dbo.AlterTest t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) pl
DBID   file_id     page_id     slot_id     ID          col1   col2  col3
------ ----------- ----------- ----------- ----------- ------ ----- -----------
16     1           168         0           1           aaa    bbb   587800818
16     1           168         1           2           aaa    bbb   1396332306
16     1           171         0           3           aaa    bbb   844570652
16     1           171         1           4           aaa    bbb   1332601405
16     1           175         0           5           ccc    ddd   2059368981
16     1           177         0           6           ccc    ddd   1449062892
16     1           178         0           7           ccc    ddd   267569086
16     1           179         0           8           ccc    ddd   1325350591

From the results we can see that the first 4 records originally inserted occupy only two pages, as two records were stored per database page. After the ALTER, each newly inserted record occupies its own page. This is because the record length has increased not only by the 1000 characters by which Col1 was extended, but also by the original 2000 bytes which are now wasted; therefore the fixed data length on the page increased from 3008 bytes (4 + 2000 + 1000 + 4) to 6008 bytes (4 + 2000 wasted + 1000 + 4 + 3000).

Now let’s take a closer look at the physical page, for example the first page occupied by the newly inserted data (page 175).

dbcc page(16,1,175,3)
GO

Partial results of the DBCC command are here:

PAGE: (1:175)

BUFFER:

BUF @0x0000000503366540

bpage = 0x00000004D7F84000          bhash = 0x000000067753ED81          bpageno = (1:175)
bdbid = 16                          breferences = 0                     bcputicks = 0
bsampleCount = 0                    bUse1 = 45327                       bstat = 0x10b
blog = 0x212121cc                   bnext = 0x0000000000000000          

PAGE HEADER:

Page @0x00000004D7F84000

m_pageId = (1:175)                  m_headerVersion = 1                 m_type = 1
m_typeFlagBits = 0x0                m_level = 0                         m_flagBits = 0x8000
m_objId (AllocUnitId.idObj) = 98    m_indexId (AllocUnitId.idInd) = 256 
Metadata: AllocUnitId = 72057594044350464                                
Metadata: PartitionId = 72057594039959552                                Metadata: IndexId = 0
Metadata: ObjectId = 549576996      m_prevPage = (0:0)                  m_nextPage = (0:0)
pminlen = 6012                      m_slotCnt = 1                       m_freeCnt = 2079
m_freeData = 6111                   m_reservedCnt = 0                   m_lsn = (37:390:11)
m_xactReserved = 0                  m_xdesId = (0:0)                    m_ghostRecCnt = 0
m_tornBits = 0                      DB Frag ID = 1                      

Allocation Status

GAM (1:2) = ALLOCATED               SGAM (1:3) = NOT ALLOCATED          
PFS (1:1) = 0x62 MIXED_EXT ALLOCATED  80_PCT_FULL                        DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED           

Slot 0 Offset 0x60 Length 6015

Record Type = PRIMARY_RECORD        Record Attributes =  NULL_BITMAP    Record Size = 6015

Memory Dump @0x000000002267A060

0000000000000000:   10007c17 05000000 df5af8d7 04000000 ffffffff  ..|.....ßZø×....ÿÿÿÿ
0000000000000014:   ffffff7f 9860f8d7 04000000 0040f8d7 04000000  ÿÿÿ..`ø×.....@ø×....
0000000000000028:   01000000 00000000 63000000 00000000 dd7618f4  ........c.......Ýv.ô
0000000000000794:   00000000 00000000 00000000 00000000 01000000  ....................
00000000000007A8:   000071d8 fe070000 c072210e 00000000 00000000  ..qØþ...Àr!.........
00000000000007BC:   00000000 00000000 00000000 00000000 00000000  ....................
00000000000007D0:   00000000 00000000 64646420 20202020 20202020  ........ddd         
00000000000007E4:   20202020 20202020 20202020 20202020 20202020                      
0000000000000BA4:   20202020 20202020 20202020 20202020 20202020                      
0000000000000BB8:   20202020 20202020 157abf7a 63636320 20202020          .z¿zccc     
0000000000000BCC:   20202020 20202020 20202020 20202020 20202020                      
0000000000000BE0:   20202020 20202020 20202020 20202020 20202020                      
000000000000175C:   20202020 20202020 20202020 20202020 20202020                      
0000000000001770:   20202020 20202020 20202020 050000                         ...

Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

ID = 5                              

Slot 0 Column 67108865 Offset 0x8 Length 0 Length (physical) 2000

DROPPED = NULL                      

Slot 0 Column 3 Offset 0x7d8 Length 1000 Length (physical) 1000

col2 = ddd                                                                                                       

Slot 0 Column 4 Offset 0xbc0 Length 4 Length (physical) 4

col3 = 2059368981                   

Slot 0 Column 2 Offset 0xbc4 Length 3000 Length (physical) 3000

col1 = ccc

From the output above we can see that on the physical page there is now a Column 67108865 (DROPPED) following Column 1, and Column 2 has physically moved to the end of the record at offset 0xbc4 (3012). Also, from the page dump we can see that the space from offset 0x8 to 0x7d7 contains garbage and that those 2000 bytes are wasted.

As we can see, the original query against sys.system_internals_partition_columns doesn’t show the column with ID 67108865 (DROPPED). That is because it joins to sys.columns, and the DROPPED column is not part of the table, but is part of the partition. If we use the query without the join to sys.columns, it will be shown in the query output as well.

SELECT 
    partition_column_id, 
    max_inrow_length,
    pc.system_type_id, 
    leaf_offset,
    is_dropped 
FROM sys.system_internals_partition_columns pc
INNER JOIN sys.partitions p ON p.partition_id = pc.partition_id 
WHERE p.object_id=object_id('AlterTest');
partition_column_id max_inrow_length system_type_id leaf_offset is_dropped
------------------- ---------------- -------------- ----------- ---------
1                   4                56             4           0
67108865            2000             175            8           1
3                   1000             175            2008        0
4                   4                56             3008        0
2                   3000             175            3012        0

It seems that the dropped columns have IDs starting from 67108865 and the numbers increase as more columns are dropped. The DROPPED columns also have the flag is_dropped = 1.

Multiple updates of the column size

In the previous examples we took a look at altering a single column, and in Delaney’s post you can see the result of altering multiple columns. If we alter multiple columns, the space allocated for all the original columns is dropped and new space is allocated. But what happens if we alter the same column multiple times? Let’s make a simple test.

CREATE TABLE AlterTest2 (
	id int not null identity(1,1),
	Col1 char(1000),
	Col2 char(1000),
	Col3 int
)
GO
ALTER TABLE AlterTest2
ALTER COLUMN Col1 char(2000)
GO
ALTER TABLE AlterTest2
ALTER COLUMN Col1 char(2500)
GO
SELECT 
    partition_column_id, 
    max_inrow_length,
    pc.system_type_id, 
    leaf_offset,
    is_dropped 
FROM sys.system_internals_partition_columns pc
INNER JOIN sys.partitions p ON p.partition_id = pc.partition_id 
WHERE p.object_id=object_id('AlterTest2');
partition_column_id max_inrow_length system_type_id leaf_offset is_dropped
------------------- ---------------- -------------- ----------- ----------
1                   4                56             4           0
67108865            1000             175            8           1
3                   1000             175            1008        0
4                   4                56             2008        0
67108866            2000             175            2012        1
2                   2500             175            4012        0

From the example we can see that each change which increases the fixed column length causes the original column space to be dropped and additional space to be allocated.

Recovering the wasted space

It is great that extending the width of a fixed-length column is a metadata-only operation, as it is very quick and avoids blocking, especially on large tables; but on the other side, as we can see, this can waste a significant amount of space.

If there will be fewer new inserts into the table than the current amount of rows, we do not need to care much about the wasted space (from the point of view of storage), as the metadata change didn’t affect the current records; the existing records actually save space because they keep the original shorter record length, and only the newly inserted records waste space.

On the other side, if we know there will be a lot of inserts and reads of the newly inserted records, then it’s a good idea to recover the wasted space, as the data will consume more space and further reads will have to read more unnecessary data.

In the comments to the original post there are some suggestions on how to resolve the problem.

From my perspective, if the table is already clustered, we do not need to drop and recreate the clustered index; it is enough to REBUILD the index. The rebuild operation will reorder the data, free up the wasted space, and write the original data with the new record length.
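A minimal sketch of the rebuild (the index name PK_AlterTest is illustrative; our AlterTest table in this post is actually a heap, so this applies to tables that already have a clustered index):

-- assuming the table has a clustered index named PK_AlterTest (illustrative name)
ALTER INDEX PK_AlterTest ON dbo.AlterTest REBUILD;
GO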

In the case of a heap, creating and dropping a clustered index would be quite a costly operation which moves the data twice. On small tables this doesn’t matter, but on larger amounts of data it is better to do a simple SELECT * INTO newTable FROM aTable and then simply drop the original table and rename the new one to the original name.

SELECT
*
INTO dbo.AlterTestNew
FROM dbo.AlterTest
GO
SELECT
    DB_ID() AS DBID
    ,pl.*
    ,t.*
FROM dbo.AlterTestNew t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) pl
GO
DBID   file_id     page_id     slot_id     ID          col1  col2  col3
------ ----------- ----------- ----------- ----------- ----- ----- -----------
16     1           411         0           1           aaa   bbb   587800818
16     1           411         1           2           aaa   bbb   1396332306
16     1           412         0           3           aaa   bbb   844570652
16     1           412         1           4           aaa   bbb   1332601405
16     1           413         0           5           ccc   ddd   2059368981
16     1           413         1           6           ccc   ddd   1449062892
16     1           414         0           7           ccc   ddd   267569086
16     1           414         1           8           ccc   ddd   1325350591

We can see that the new table has no wasted space and each page again holds two records.
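To finish the swap, we can drop the original table and rename the copy back to the original name (a minimal sketch; recreating any constraints and indexes is omitted):

DROP TABLE dbo.AlterTest;
GO
EXEC sp_rename 'dbo.AlterTestNew', 'AlterTest';
GO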

Of course, if there are some foreign keys and indexes, those will have to be recreated. Anyway, this will cost much less I/O and processing power than creating and dropping a clustered index.

Conclusion

As we can see from the examples above, increasing the length of a fixed-length column, even though it is a metadata-only operation, causes the originally allocated space to be dropped and new space to be allocated in the row for all newly inserted or updated rows in the table. So be careful when altering fixed-length columns, especially when you are doing multiple alters to a single column, as significant space can be wasted. Knowing that the dropped columns have the flag is_dropped = 1 in sys.system_internals_partition_columns, we can use the query below to list all the tables containing DROPPED columns with wasted space.

SELECT distinct
	p.partition_id,
	p.object_id,
	o.name as table_name
FROM sys.system_internals_partitions p
INNER JOIN sys.objects o on p.object_id = o.object_id
INNER JOIN sys.system_internals_partition_columns pc ON p.partition_id = pc.partition_id
WHERE pc.is_dropped = 1
partition_id         object_id   table_name
-------------------- ----------- -----------
72057594040025088    1253579504  AlterTest2
72057594040090624    1269579561  AlterTest
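If we also want a rough estimate of how many bytes are wasted per row, we can aggregate the in-row lengths of the dropped columns; a sketch over the same internals views, assuming a single partition per table:

SELECT
    o.name AS table_name,
    SUM(pc.max_inrow_length) AS wasted_bytes_per_row
FROM sys.system_internals_partitions p
INNER JOIN sys.objects o ON p.object_id = o.object_id
INNER JOIN sys.system_internals_partition_columns pc ON p.partition_id = pc.partition_id
WHERE pc.is_dropped = 1
GROUP BY o.name;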
Jun 12 2012
 

Among the many new features introduced in SQL Server 2012, new windowing functions were also introduced. The new functionality allows us to use the ORDER BY clause in the OVER clause with aggregate functions, and new ROWS and RANGE clauses were introduced to limit rows. The ORDER BY defines the order of row processing and the ROWS/RANGE clauses put limits on the rows being processed within the partition. All the details related to the OVER clause can be found on MSDN: OVER Clause (Transact-SQL).

ROWS/RANGE clause

The ROWS clause limits the rows in a partition by specifying a fixed number of rows preceding or following the current row. The preceding and following rows are determined by the order specified in the ORDER BY clause.

The limit can be specified by several methods:

  • <unsigned integer> PRECEDING – fixed number of preceding rows
  • CURRENT ROW – representing current row being processed
  • UNBOUNDED PRECEDING – all previous records
  • <unsigned integer> FOLLOWING – fixed number of following rows
  • UNBOUNDED FOLLOWING – all rows following current row

So we can specify the limits like

ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
ROWS BETWEEN CURRENT ROW AND 3 FOLLOWING

RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
RANGE CURRENT ROW

The RANGE clause can only be used with the UNBOUNDED limits and CURRENT ROW. The difference between the ROWS and RANGE clauses is that ROWS works with physical rows while RANGE works with a range of rows based on the current row’s value in terms of the ORDER BY clause. This means that for the ROWS clause, CURRENT ROW represents only the single current row being processed. For RANGE, CURRENT ROW represents all the rows with the same value in the fields specified in the ORDER BY clause, within the current partition, as the current row being processed. So if we use RANGE and multiple rows have the same rank in terms of order within the partition, then all those rows together represent the current row.

When no ROWS/RANGE clause is specified after the ORDER BY clause, SQL Server uses the default RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.

Samples of how to use the window functions

Let’s take a look at a few samples of how we can use the window functions and what results they provide.

Test data preparation

To be able to test the new functionality, let’s first create a test database with two testing tables and fill them with sample data.

--======================
-- Create test database
--======================
CREATE DATABASE WindowFunctionsTest
GO
USE WindowFunctionsTest
GO 

--Create Testing Tables
CREATE TABLE [dbo].[Accounts](
	[TransactionID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
	[TransactionDate] [datetime] NULL,
	[Balance] [float] NULL
)
GO
CREATE TABLE [dbo].[MultiAccounts](
	[TransactionID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
	[AccountID] [int] NOT NULL,
	[TransactionDate] [datetime] NULL,
	[Balance] [float] NULL
)
GO
--Fill test tables with data
INSERT INTO [dbo].[Accounts](
	[TransactionDate],
	[Balance]
)
SELECT '2000-1-1', 100 UNION ALL
SELECT '2000-1-1', -50 UNION ALL
SELECT '2000-1-2', 200 UNION ALL
SELECT '2000-1-3', 500 UNION ALL
SELECT '2000-1-4', -200 UNION ALL
SELECT '2000-1-5', 1000 UNION ALL
SELECT '2000-1-5', -300 UNION ALL
SELECT '2000-1-6', -300 UNION ALL
SELECT '2000-1-7', -200 UNION ALL
SELECT '2000-1-8', 2000 UNION ALL
SELECT '2000-1-9', 100 UNION ALL
SELECT '2000-1-10', -50 UNION ALL
SELECT '2000-1-10', 500 UNION ALL
SELECT '2000-1-11', 200 UNION ALL
SELECT '2000-1-12', 200 UNION ALL
SELECT '2000-1-13', 1000 UNION ALL
SELECT '2000-1-14', 1000 UNION ALL
SELECT '2000-1-15', -500 UNION ALL
SELECT '2000-1-15', -300 UNION ALL
SELECT '2000-1-16', 1000 UNION ALL
SELECT '2000-1-17', 1000 UNION ALL
SELECT '2000-1-18', -800 UNION ALL
SELECT '2000-1-19', 2000 UNION ALL
SELECT '2000-1-20', -1000
GO

INSERT [dbo].[MultiAccounts] (
	[AccountID],
	[TransactionDate],
	[Balance]
)
SELECT 1, '2000-1-1', 100 UNION ALL
SELECT 1, '2000-1-1', -50 UNION ALL
SELECT 1, '2000-1-2', 200 UNION ALL
SELECT 1, '2000-1-3', 500 UNION ALL
SELECT 1, '2000-1-4', -200 UNION ALL
SELECT 1, '2000-1-5', 1000 UNION ALL
SELECT 1, '2000-1-5', -300 UNION ALL
SELECT 1, '2000-1-6', -300 UNION ALL
SELECT 1, '2000-1-7', -200 UNION ALL
SELECT 2, '2000-1-1', 2000 UNION ALL
SELECT 2, '2000-1-2', 100 UNION ALL
SELECT 2, '2000-1-3', -50 UNION ALL
SELECT 2, '2000-1-4', 500 UNION ALL
SELECT 2, '2000-1-5', 200 UNION ALL
SELECT 2, '2000-1-6', 200 UNION ALL
SELECT 2, '2000-1-7', 1000 UNION ALL
SELECT 2, '2000-1-7', 1000 UNION ALL
SELECT 3, '2000-1-1', 800 UNION ALL
SELECT 3, '2000-1-2', -300 UNION ALL
SELECT 3, '2000-1-3', 1000 UNION ALL
SELECT 3, '2000-1-4', 1000 UNION ALL
SELECT 3, '2000-1-5', -800 UNION ALL
SELECT 3, '2000-1-6', 2000 UNION ALL
SELECT 3, '2000-1-7', -1000
GO

Window functions samples

If we run any of the queries below, they will all provide the same results.

--Using the ROWS clause
SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (ORDER BY TransactionDate, TransactionID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CummulativeBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate, TransactionID
GO
--The same as above; ROWS UNBOUNDED PRECEDING will be completed by SQL Server to ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
--If we specify only the left boundary, SQL Server automatically fills the right boundary
SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (ORDER BY TransactionDate, TransactionID ROWS UNBOUNDED PRECEDING ) AS CummulativeBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate, TransactionID
GO
--Using the RANGE Clause
SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (ORDER BY TransactionDate, TransactionID RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CummulativeBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate, TransactionID
GO
--The same as above; RANGE UNBOUNDED PRECEDING will be completed by SQL Server to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
--If we specify only the left boundary, SQL Server automatically fills the right boundary
SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (ORDER BY TransactionDate, TransactionID RANGE UNBOUNDED PRECEDING ) AS CummulativeBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate, TransactionID
GO
--No ROWS/RANGE clause (SQL Server will use the default RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (ORDER BY TransactionDate, TransactionID) AS CummulativeBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate, TransactionID
GO

The results are below and we can see that a correct cumulative balance is calculated.

TransactionID TransactionDate         Balance                CummulativeBalance
------------- ----------------------- ---------------------- ----------------------
1             2000-01-01 00:00:00.000 100                    100
2             2000-01-01 00:00:00.000 -50                    50
3             2000-01-02 00:00:00.000 200                    250
4             2000-01-03 00:00:00.000 500                    750
5             2000-01-04 00:00:00.000 -200                   550
6             2000-01-05 00:00:00.000 1000                   1550
.             .                       .                      .
.             .                       .                      .
.             .                       .                      .
20            2000-01-16 00:00:00.000 1000                   5900
21            2000-01-17 00:00:00.000 1000                   6900
22            2000-01-18 00:00:00.000 -800                   6100
23            2000-01-19 00:00:00.000 2000                   8100
24            2000-01-20 00:00:00.000 -1000                  7100

ROWS clause with a non-unique order

SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (ORDER BY TransactionDate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CummulativeBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate
GO

The results will be the same as in the previous example. They are the same because there is no parallelism and we have a CLUSTERED INDEX on TransactionID (otherwise the final order could be different, because the order of rows with the same TransactionDate is not guaranteed here).

TransactionID TransactionDate         Balance                CummulativeBalance
------------- ----------------------- ---------------------- ----------------------
1             2000-01-01 00:00:00.000 100                    100
2             2000-01-01 00:00:00.000 -50                    50
3             2000-01-02 00:00:00.000 200                    250
4             2000-01-03 00:00:00.000 500                    750
5             2000-01-04 00:00:00.000 -200                   550
6             2000-01-05 00:00:00.000 1000                   1550
7             2000-01-05 00:00:00.000 -300                   1250
.             .                       .                      .
.             .                       .                      .
.             .                       .                      .
20            2000-01-16 00:00:00.000 1000                   5900
21            2000-01-17 00:00:00.000 1000                   6900
22            2000-01-18 00:00:00.000 -800                   6100
23            2000-01-19 00:00:00.000 2000                   8100
24            2000-01-20 00:00:00.000 -1000                  7100

RANGE clause with a non-unique order

SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (ORDER BY TransactionDate RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CummulativeBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate, TransactionID
GO

Here we can see that the results are quite different. The final sum is the same, but the intermediate results are not.

TransactionID TransactionDate         Balance                CummulativeBalance
------------- ----------------------- ---------------------- ----------------------
1             2000-01-01 00:00:00.000 100                    50
2             2000-01-01 00:00:00.000 -50                    50
3             2000-01-02 00:00:00.000 200                    250
4             2000-01-03 00:00:00.000 500                    750
5             2000-01-04 00:00:00.000 -200                   550
6             2000-01-05 00:00:00.000 1000                   1250
7             2000-01-05 00:00:00.000 -300                   1250
8             2000-01-06 00:00:00.000 -300                   950
.             .                       .                      .
.             .                       .                      .
.             .                       .                      .
17            2000-01-14 00:00:00.000 1000                   5700
18            2000-01-15 00:00:00.000 -500                   4900
19            2000-01-15 00:00:00.000 -300                   4900
20            2000-01-16 00:00:00.000 1000                   5900
21            2000-01-17 00:00:00.000 1000                   6900
22            2000-01-18 00:00:00.000 -800                   6100
23            2000-01-19 00:00:00.000 2000                   8100
24            2000-01-20 00:00:00.000 -1000                  7100

Here we can see that RANGE works as described above. All rows with the same value in the ORDER BY clause are considered the current row. Therefore the cumulative values for the dates '2000/01/01', '2000/01/05' and '2000/01/15' are the same for each of those dates.

Working with FOLLOWING Rows

All the examples above worked with the current row and all preceding rows. Besides this, we can also work with the rows following the current row in the particular order.

Here are a few more examples, incorporating FOLLOWING rows as well.

--Sum of current row and all following rows
SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (ORDER BY TransactionDate, TransactionID ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS CummulativeBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate, TransactionID

--SUM of 1 preceding, current and one following row
SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (ORDER BY TransactionDate, TransactionID ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS CummulativeBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate, TransactionID

--SUM of all rows in each row
SELECT
    [TransactionID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER () AS FinalBalance
FROM [dbo].[Accounts]
ORDER BY TransactionDate, TransactionID

Example with Partitioning results

In the previous examples we worked with a single partition. The OVER clause also allows partitioning the results, so let’s see a few examples with partitioning.

--Sum of current row and all following rows partitioned, by AccountID
SELECT
    [TransactionID]
	,[AccountID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (PARTITION BY [AccountID] ORDER BY TransactionDate, TransactionID ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS CummulativeBalance
FROM [dbo].[MultiAccounts]
ORDER BY AccountID, TransactionDate, TransactionID

--SUM of 1 preceding, current and one following row, partitioned by AccountID
SELECT
    [TransactionID]
	,[AccountID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (PARTITION BY [AccountID] ORDER BY TransactionDate, TransactionID ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS CummulativeBalance
FROM [dbo].[MultiAccounts]
ORDER BY AccountID, TransactionDate, TransactionID

--SUM of all rows in each row
SELECT
    [TransactionID]
	,[AccountID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (PARTITION BY [AccountID]) AS FinalBalance
FROM [dbo].[MultiAccounts]
ORDER BY AccountID, TransactionDate, TransactionID

--SUM of all preceding rows and the current row, partitioned by AccountID, order based only on TransactionDate - using RANGE
SELECT
    [TransactionID]
	,[AccountID]
    ,[TransactionDate]
    ,[Balance]
    ,SUM(Balance) OVER (PARTITION BY [AccountID] ORDER BY TransactionDate RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CummulativeBalance
FROM [dbo].[MultiAccounts]
ORDER BY AccountID, TransactionDate

When to use ROWS and when RANGE

Now we can ask when we should use the ROWS clause and when the RANGE clause to limit the rows. The answer comes from the definition of how the ROWS and RANGE clauses work. As described, ROWS works with individual physical rows and RANGE handles all rows with the same order position as the current row.

So in case the combination of fields specified in the ORDER BY clause does not uniquely determine the order of rows (as in the examples above where only TransactionDate was used), you should use RANGE, as the processing order of rows with the same order position is not guaranteed. In case the rows are uniquely identified, ROWS should be used, as there are no rows with the same order position in the partition.
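As a small illustration of a typical ROWS frame under a unique order, here is a three-row moving average over our Accounts test table (the MovingAvg3 alias is purely illustrative):

SELECT
    TransactionID,
    TransactionDate,
    Balance,
    AVG(Balance) OVER (ORDER BY TransactionDate, TransactionID
                       ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS MovingAvg3
FROM dbo.Accounts
ORDER BY TransactionDate, TransactionID;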

Conclusion

The new windowing functions bring new possibilities in writing T-SQL queries and can simplify a lot of tasks which were problematic to write without these constructs. They allow us to bypass recursive CTEs and other solutions for calculating running totals or averages without knocking down the server, and also let us avoid quirky updates and CLR solutions, which have some pitfalls when used.

In my next post I will take a closer look at the Running Totals problem when using this new windowing functionality. I will also take a closer look at the query plans produced by these constructs and give some advice on using them.

Jan 27 2012
 

Sometimes it may happen that you need to remove accents (diacritics) from strings when you are querying data or when you are storing data into a particular table.

There are several methods to achieve this. One possible solution is using CONVERT with COLLATE, as tweeted by MVP Mladen Prajdic and mentioned on Luke Jian’s blog, but this method has an issue I will mention below.

Another solution could be replacing accented characters with their unaccented counterparts, but this is very problematic: you have to do a lot of replacements (performance can suffer), it is hard to cover all possible accented characters, and you can easily miss one (see the sketch below). Similar to this would be creating a replacement table and implementing a function similar to the splitting function using a Tally Table (Jeff Moden), but again you would have to write every combination of accented and unaccented characters into the replacement table.
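Just to illustrate why the replacement approach scales so badly, here is what it looks like for only three accented characters (illustrative only; it also behaves as intended only under an accent-sensitive collation):

SELECT REPLACE(REPLACE(REPLACE(N'à, é, î', N'à', N'a'), N'é', N'e'), N'î', N'i');
-- every additional accented character requires yet another nested REPLACE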

So if you do not want to change the collation and would like to avoid the possible issues of the CONVERT method, writing replacement tables etc., you can easily use a very simple and well-known CLR method. This method uses string normalization and the CharUnicodeInfo class from the System.Globalization namespace. Based on this you can write a very simple scalar CLR function.

using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;
using System.Globalization;

public class StringsCommon
{
    /// <summary>
    /// Removes Accent (Diacritics) from string
    /// </summary>
    /// <param name="sourceString">Source string fro wchich accent should be removed</param>
    /// <returns>string without accent</returns>
    [SqlFunction(IsDeterministic = true)]
    public static SqlChars RemoveAccent(SqlString sourceString)
    {
        if (sourceString.IsNull)
            return SqlChars.Null;

        string normalized = sourceString.Value.Normalize(NormalizationForm.FormD);

        StringBuilder output = new StringBuilder(sourceString.Value.Length);

        foreach (char ch in normalized)
        {
            if (CharUnicodeInfo.GetUnicodeCategory(ch) != UnicodeCategory.NonSpacingMark)
                output.Append(ch);
        }
        return new SqlChars(output.ToString());
    }
}

Once you compile the above function into a .NET assembly, you can register the assembly and function in your database.

CREATE ASSEMBLY [PPSqlClrSafe]
AUTHORIZATION [dbo]
FROM 'C:\CLR\PPSqlClrSafe.dll'
WITH PERMISSION_SET = SAFE
GO

CREATE FUNCTION [ClrSafe].[fn_RemoveAccent](
	@sourceString [nvarchar](max)  --Source string to remove accent
)
RETURNS [nvarchar](4000)
WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [PPSqlClrSafe].[StringsCommon].[RemoveAccent]
GO

Once the function is registered, you can easily use it to remove accents (diacritics) from whatever string you want.

WITH Data AS (
    SELECT
        'Czech' AS [Language]
        ,N'á, é, í, ó, ú, ý, č, ď, ě, ň, ř, š, ť, ž, ů' AS [AccentChars] UNION ALL
    SELECT
        'Norwegian' AS [Language]
        ,N'ø' AS [AccentChars] UNION ALL
    SELECT
        'Estonian' AS [Language]
        ,N'õ' AS [AccentChars] UNION ALL
    SELECT
        'French' AS [Language]
        ,N'à, â, ç, é, è, ê, ë, î, ï, ô, ù, û, ü' AS [AccentChars] UNION ALL
    SELECT
        'Croatian, Vietnamese' AS [Language]
        ,N'đ' AS [AccentChars] UNION ALL
    SELECT
        'Latvian' AS [Language]
        ,N'ā, ē, ī, ū, ļ, ķ, ņ, ģ' AS [AccentChars] UNION ALL
    SELECT
        'Hungarian' AS [Language]
        ,N'ö, ü, ő, ű' AS [AccentChars] UNION ALL
    SELECT
        'Polish' AS [Language]
        ,N'ą, ć, ę, ń, ó, ś, ź, ż' AS [AccentChars] UNION ALL
    SELECT
        'Romanian' AS [Language]
        ,N'ă, â, î, ș, ț' AS [AccentChars] UNION ALL
    SELECT
        'Slovak' AS [Language]
        ,N'ô, ť, ľ, ŕ, ĺ, ä' AS [AccentChars] UNION ALL
    SELECT
        'Spanish' AS [Language]
        ,N'ñ' AS [AccentChars] UNION ALL
    SELECT
        'Swedish' AS [Language]
        ,N'å' AS [AccentChars] UNION ALL
    SELECT
        'Turkish' AS [Language]
        ,N'ç, ş, ğ' AS [AccentChars] UNION ALL
    SELECT
        'Greece' AS [Language]
        ,N'Γ, δ, ξ, Φ' AS [AccentChars]
)
SELECT
    [Language]
    ,[AccentChars]
    ,[ClrSafe].fn_RemoveAccent([AccentChars]) AS [RemovedAccent]
FROM Data

Which produces the result below:

Language             AccentChars                                 RemovedAccent
-------------------- ------------------------------------------- --------------------------------------------
Czech                á, é, í, ó, ú, ý, č, ď, ě, ň, ř, š, ť, ž, ů a, e, i, o, u, y, c, d, e, n, r, s, t, z, u
Norwegian            ø                                           ø
Estonian             õ                                           o
French               à, â, ç, é, è, ê, ë, î, ï, ô, ù, û, ü       a, a, c, e, e, e, e, i, i, o, u, u, u
Croatian, Vietnamese đ                                           đ
Latvian              ā, ē, ī, ū, ļ, ķ, ņ, ģ                      a, e, i, u, l, k, n, g
Hungarian            ö, ü, ő, ű                                  o, u, o, u
Polish               ą, ć, ę, ń, ó, ś, ź, ż                      a, c, e, n, o, s, z, z
Romanian             ă, â, î, ș, ț                               a, a, i, s, t
Slovak               ô, ť, ľ, ŕ, ĺ, ä                            o, t, l, r, l, a
Spanish              ñ                                           n
Swedish              å                                           a
Turkish              ç, ş, ğ                                     c, s, g
Greece               Γ, δ, ξ, Φ                                  Γ, δ, ξ, Φ

When you compare it to the CONVERT with COLLATE method, you can find that this approach doesn’t have the problem with non-existent characters: CONVERT converts the characters to a particular character set which does not need to contain all the characters of the source character set. We can see this e.g. on the Norwegian and Greek characters.

If we use the CONVERT with COLLATION method we receive:

WITH Data AS (
    SELECT
        'Czech' AS [Language]
        ,N'á, é, í, ó, ú, ý, č, ď, ě, ň, ř, š, ť, ž, ů' AS [AccentChars] UNION ALL
    SELECT
        'Norwegian' AS [Language]
        ,N'ø' AS [AccentChars] UNION ALL
    SELECT
        'Estonian' AS [Language]
        ,N'õ' AS [AccentChars] UNION ALL
    SELECT
        'French' AS [Language]
        ,N'à, â, ç, é, è, ê, ë, î, ï, ô, ù, û, ü' AS [AccentChars] UNION ALL
    SELECT
        'Croatian, Vietnamese' AS [Language]
        ,N'đ' AS [AccentChars] UNION ALL
    SELECT
        'Latvian' AS [Language]
        ,N'ā, ē, ī, ū, ļ, ķ, ņ, ģ' AS [AccentChars] UNION ALL
    SELECT
        'Hungarian' AS [Language]
        ,N'ö, ü, ő, ű' AS [AccentChars] UNION ALL
    SELECT
        'Polish' AS [Language]
        ,N'ą, ć, ę, ń, ó, ś, ź, ż' AS [AccentChars] UNION ALL
    SELECT
        'Romanian' AS [Language]
        ,N'ă, â, î, ș, ț' AS [AccentChars] UNION ALL
    SELECT
        'Slovak' AS [Language]
        ,N'ô, ť, ľ, ŕ, ĺ, ä' AS [AccentChars] UNION ALL
    SELECT
        'Spanish' AS [Language]
        ,N'ñ' AS [AccentChars] UNION ALL
    SELECT
        'Swedish' AS [Language]
        ,N'å' AS [AccentChars] UNION ALL
    SELECT
        'Turkish' AS [Language]
        ,N'ç, ş, ğ' AS [AccentChars] UNION ALL
    SELECT
        'Greece' AS [Language]
        ,N'Γ, δ, ξ, Φ' AS [AccentChars]
)
SELECT
    [Language]
    ,[AccentChars]
    ,CONVERT(varchar(50), [AccentChars]) COLLATE Cyrillic_General_CI_AI AS [RemovedAccent]
FROM Data

Results:

Language             AccentChars                                 RemovedAccent
-------------------- ------------------------------------------- --------------------------------------------
Czech                á, é, í, ó, ú, ý, č, ď, ě, ň, ř, š, ť, ž, ů a, e, i, o, u, y, c, d, e, n, r, s, t, z, u
Norwegian            ø                                           o
Estonian             õ                                           o
French               à, â, ç, é, è, ê, ë, î, ï, ô, ù, û, ü       a, a, c, e, e, e, e, i, i, o, u, u, u
Croatian, Vietnamese đ                                           d
Latvian              ā, ē, ī, ū, ļ, ķ, ņ, ģ                      a, e, i, u, l, k, n, g
Hungarian            ö, ü, ő, ű                                  o, u, o, u
Polish               ą, ć, ę, ń, ó, ś, ź, ż                      a, c, e, n, o, s, z, z
Romanian             ă, â, î, ș, ț                               a, a, i, ?, ?
Slovak               ô, ť, ľ, ŕ, ĺ, ä                            o, t, l, r, l, a
Spanish              ñ                                           n
Swedish              å                                           a
Turkish              ç, ş, ğ                                     c, s, g
Greece               Γ, δ, ξ, Φ                                  ?, ?, ?, ?

From the above we can see that the CLR solution provides much better results and you do not lose any characters by conversion to a non-Unicode character set.

If you are interested, you can grab the sample C# project here: PPSqlClrSafe_AccentRemoval.zip

Any comments are welcome.

Jan 24 2012
 

This article presents a custom component for transforming columns inside an SSIS Data Flow into XML.

If you are writing ETL processes using SSIS, you often come to a situation where you need to log errors or other problems which can occur during the ETL process. An example is a failure of data conversion during the ETL process. In that situation you would usually like to know exactly which record caused the error, and log such conversion errors into a particular error table.

A problem occurs when you have a lot of tables and want to see what data were converted and caused a particular error. As SSIS requires the sources and destinations to be strongly mapped, and you have different sources with different structures, this leads to a situation where you have a lot of different error tables – generally one per source structure at which an error can occur.

This situation can be easily solved by a custom Data Flow component which converts all source columns into XML. With it you can have only one error table, containing e.g. an error code, error column, description, and an XML field holding all the data of the particular row which caused the error, encoded in XML form. This rapidly simplifies error logging and also error log monitoring, as you need to monitor only a single table. A possible layout of such a table is sketched below.
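A minimal sketch of such an error table, using the column names assumed by the query later in this post (the exact layout is up to you):

CREATE TABLE dbo.ETLErrorLog (
    RowID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ErrorCode int NULL,                      -- SSIS error code
    ErrorColumn int NULL,                    -- lineage ID of the failing column
    ErrorDescription nvarchar(1024) NULL,    -- error description
    ColumnsXMLData xml NULL                  -- the whole source row encoded as XML
);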

So the SSIS Data Flow with the Custom Columns to XML Transformation could look like the one below.

Columns To XML Data Flow

ColumnsToXML Properties
ColumnsToXML Input Columns

You can provide SourceID and SourceName properties which are then part of the produced XML, so you can easily identify the source. In the Input Columns tab you select the columns you want to be written to the XML. You can also specify an OutputAlias; the column names written into the XML will then be the ones specified in the OutputAlias property.

The Log written using that component and the XML produced will look like the ones below.

ColumnsToXML Log

<row sourceID="1" sourceName="Test Data Conversion">
  <Column name="Name" id="235" lineageId="72">Joseph</Column>
  <Column name="ValueInt" id="236" lineageId="75">2</Column>
  <Column name="ValueDate" id="237" lineageId="78">2012/01/35</Column>
</row>

Using the .value method of the XML data type, we can easily query the necessary information from the log table for whatever source is in the table. For our sample scenario we would like to query the Name, ValueInt and ValueDate columns stored as XML. This can be achieved by the SQL query below:

SELECT
    [RowID]
    ,[ErrorCode]
    ,[ErrorColumn]
    ,[ColumnsXMLData].value('data((/row/Column[@name="Name"])[1])', 'varchar(128)') AS Name
    ,[ColumnsXMLData].value('data((/row/Column[@name="ValueInt"])[1])', 'varchar(10)') AS ValueInt
    ,[ColumnsXMLData].value('data((/row/Column[@name="ValueDate"])[1])', 'varchar(10)') AS ValueDate
FROM [TestDB].[dbo].[ETLErrorLog]
WHERE [ColumnsXMLData].value('data((/row/@sourceID)[1])', 'int') = 1

As you can see from the samples above, you can easily create a single error table for all your ETL logging and simplify the logging rapidly, as you do not need to create error tables with separate structures for each source.

ColumnsToXML Implementation

The ColumnsToXmlTransform is a custom implementation of the PipelineComponent. It is developed in C# for .NET Framework 3.5 and uses LINQ to XML for creating the XML output.

The component processes selected input columns, generates the XML and stores it in the output column called ColumnsXMLData.

In addition to the ColumnsXMLData column, the component provides two properties (SourceID and SourceName) to allow you to identify the source row once it is written as XML into the database, which simplifies querying for data coming from particular sources, as shown above.

Once the component is built, it is necessary to install it into the Global Assembly Cache and into the PipelineComponents folder of the SQL Server Integration Services installation path on the machine on which the SSIS packages using this component will be running.

To install it into the GAC you can use gacutil.exe, which is part of the Windows SDK.

You can download the sample project from my SkyDrive here: ColumnsToXml.zip. You can compile the code using Visual C# 2010 Express.

If you run the development environment with elevated Administrator privileges, the project contains Post Build events which will install the assembly into the GAC (it assumes you have Windows SDK 7.0 installed – if you have a newer version, modify the path to gacutil.exe accordingly) and also copy the assembly into the PipelineComponents folder of the default SQL Server Integration Services installation. If your installation directory differs, please modify the paths in the Post Build events accordingly.

Once you have compiled the code, installed it in the GAC and copied it to the PipelineComponents folder, you can include the component in the Data Flow items palette. Right-click on the palette and select Choose Items…

Choose Data Flow Transformation Items Pop-up
Choose SSIS Data Flow Items

Once you select the component, it will appear on the toolbox and you can start using it.

Columns To XML On Palette

Once I have more time, I will try to create an installer for the component. Anyway, any comments are welcome.

Nov 30 2011
 

The last few months I was quite busy at work and didn’t have much time to write blog posts. I was also preparing for a Microsoft Certification exam.

Finally, last Wednesday I successfully passed Exam 70-448: TS: Microsoft SQL Server 2008, Business Intelligence Development and Maintenance and became an MCTS.

So I hope I will have more free time now and be able to publish some of my posts I have in a queue.

Anyway, I would like to start preparing for the 70-452 certification exam to become an MCITP.

Nov 30 2011
 

Finally I had time to make a second update to my post Querying Active Directory on SQL Server using CLR. This update should finally resolve the issue when you need to return large numbers of properties from Active Directory.

The solution is very simple: a possibility to specify the column lengths (maximum length) for the returned properties was added. All the structures in the code are left the same as they were originally. The only change is that the list of properties is no longer passed as a comma-separated list; a semicolon is now used to delimit the properties, and a comma can be used to provide a column length. If the length is not provided, the default of 4000 is used as before. An illustrative property list is shown below.
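For example, under these rules a property list could look like the following (the property names and lengths are purely illustrative; see the updated article for the exact function signature):

-- 'cn' capped at 128 characters, 'mail' at 256, 'description' using the default 4000
DECLARE @propertyList nvarchar(max) = N'cn,128;mail,256;description';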

So don’t hesitate and check the updated article.

I’ve also provided a link to an updated Visual C# project with all the code.

May 25 2011
 

In my earlier post SQL Server 2005 and SQL Server 2008 Regular Expressions and pattern matching I was focusing on regular expression matching on SQL Server using CLR.

That post focused only on the matches themselves, but didn’t cover another great feature of regular expressions: replacements.

When using replacements, the regular expression is first executed on the source string, then a replacement is applied to one, several or all matches, and the result of the replacement is returned.

Here are the CLR functions which handle the replacements.

using System;
using System.Collections.Generic;
using System.Text;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Text.RegularExpressions;
using System.Collections;

/// <summary>
/// Regular expressions for SQL
/// </summary>
public class SQLRegEx
{
    #region Internal structures
    private struct RegExRowReplace
    {
        /// <summary>
        /// Replacement table row
        /// </summary>
        /// <param name="matchID">ID of the match. ID = 0 represents a complete source string</param>
        /// <param name="match">
        /// String representing the match. For ID=0 it is NULL, as it is equal to the original string. Null is returned to avoid transferring the whole source string
        /// </param>
        /// <param name="result">Result of the replacement for the matchID</param>
        public RegExRowReplace(int matchID, SqlChars match, SqlChars result)
        {
            MatchID = matchID;
            Match = match;
            Result = result;
        }

        public int MatchID;
        public SqlChars Match;
        public SqlChars Result;
    }

    #endregion

    #region RegEx functions

    /// <summary>
    /// Applies the regular expression to the source string, takes the appropriate match and applies the replacement to it.
    /// </summary>
    /// <param name="sourceString">Source string on which the regular expression should be applied</param>
    /// <param name="pattern">Regular Expression pattern</param>
    /// <param name="replacement">Replacement to be used on match</param>
    /// <param name="matchId">ID of the Match to be returned. 0 replaces all matches in the sourceString</param>
    /// <returns>Value of the Group from within a Match</returns>
    [SqlFunction(IsDeterministic = true)]
    public static SqlChars RegExReplace(SqlString sourceString, SqlString pattern, SqlString replacement, int matchId)
    {
        Match m = null;
        Regex r = new Regex(pattern.Value, RegexOptions.Compiled);

        if (matchId == 0)
        {
            return new SqlChars(r.Replace(sourceString.Value, replacement.Value));
        }
        if (matchId == 1)
        {
            m = r.Match(sourceString.Value);
        }
        else if (matchId > 1)
        {
            MatchCollection mc = r.Matches(sourceString.Value);
            m = mc != null && mc.Count > matchId - 1 ? mc[matchId - 1] : null;
        }

        return m != null ? new SqlChars(m.Result(replacement.Value)) : SqlChars.Null;
    }


    /// <summary>
    /// Applies the regular expression to the source string and returns all matches and final results after replacement
    /// </summary>
    /// <param name="sourceString">Source string on which the regular expression should be applied</param>
    /// <param name="pattern">Regular Expression pattern</param>
    /// <param name="replacement">Replacement to be used on matches</param>
    /// <returns>IEnumerable</returns>
    [SqlFunction(FillRowMethodName = "FillRegExRowReplace")]
    public static IEnumerable RegExMatchesReplace(SqlString sourceString, SqlString pattern, SqlString replacement)
    {
        Regex r = new Regex(pattern.Value, RegexOptions.Compiled);
        int matchId = 0;

        foreach (Match m in r.Matches(sourceString.Value))
        {
            yield return new RegExRowReplace(++matchId, new SqlChars(m.Value), new SqlChars(m.Result(replacement.Value)));
        }
    }

    #endregion

    #region FillRow methods

    public static void FillRegExRowReplace(object obj, out int matchID, out SqlChars match, out SqlChars result)
    {
        RegExRowReplace row = (RegExRowReplace)obj;
        matchID = row.MatchID;
        match = row.Match;
        result = row.Result;
    }

    #endregion
}

RegExReplace

This is a scalar CLR function which handles a regular expression replacement. It allows replacing a single match within a string, or even all matches inside the string if matchID = 0 is passed as an argument.

RegExMatchesReplace

In contrast to RegExReplace, RegExMatchesReplace is a CLR table-valued function which returns all the matches and their replacements as separate rows. It returns a table with three columns: MatchID, which represents the ID of the match within the original string; Match, which represents the particular RegEx match from within the string; and Result, which is the result of the replacement applied to the Match.

So once the class is compiled into an assembly, we can register the functions inside the SQL Server database.

CREATE ASSEMBLY [SQLRegEx]
AUTHORIZATION [dbo]
FROM 'C:\CLR\SQLRegEx.dll'
WITH PERMISSION_SET = SAFE

CREATE FUNCTION [dbo].[fn_RegExReplace](
    @sourceString [nvarchar](max),        --Source string to apply RegEx
    @pattern [nvarchar](4000),            --RegEx pattern to apply on the source string
    @replacement [nvarchar](4000),        --Replacement pattern to apply on match
    @matchID [int] = 1                    --ID of the match to return. @matchID = 1 is the first match. If @matchID = 0 then all matches in the @sourceString are replaced
)        
RETURNS [nvarchar](4000)
WITH EXECUTE AS CALLER
AS 
EXTERNAL NAME [SQLRegEx].[SQLRegEx].[RegExReplace]
GO


CREATE FUNCTION [dbo].[fn_RegExMatchesReplace](
    @sourceString [nvarchar](max),  --Source string to apply RegEx
    @pattern [nvarchar](4000),        --RegEx to apply on the source string
    @replacement [nvarchar](4000)    --Replacement pattern to apply on matches
)        
RETURNS  TABLE (
    [matchId] [int] NULL,                --ID of returned match
    [match] [nvarchar](4000) NULL,        --value of match
    [result] [nvarchar](4000) NULL        --Replacement result
) WITH EXECUTE AS CALLER
AS 
EXTERNAL NAME [SQLRegEx].[SQLRegEx].[RegExMatchesReplace]
GO

Testing the Replace method

Here are some sample queries so you can imagine how the functions work.

Replace year in all Dates with year 2011
WITH StringDates AS (
    SELECT '2010/09/3' AS StringDate UNION ALL
    SELECT '2010/07/09' UNION ALL
    SELECT '2009/1/15'
)
SELECT
    StringDate,
    dbo.fn_RegExReplace(StringDate, '(\d{4})/(\d{1,2})/(\d{1,2})', '2011/$2/$3', 0)
FROM StringDates
Remove the values from the Key=Value pairs and leave only the Key values separated by semicolons
SELECT dbo.fn_RegExReplace(
    'Key1=Val1;Key2=Val2;Key3=Val3;Key4=Val4',
    '\s*?(\w+?)\s*?=\s*?(.*?)(;|$)',
    '$1$3',
    0)  --returns 'Key1;Key2;Key3;Key4'
Get a list of all dates from within a string and replace the year of each date with 2011
SELECT
*
FROM dbo.fn_RegExMatchesReplace(
    '2010/09/3;2010/07/09;2009/1/15', 
    '(\d{4})/(\d{1,2})/(\d{1,2})', 
    '2011/$2/$3'
    )
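
The @matchID parameter of fn_RegExReplace can also target a single match: for a value greater than zero the function returns the replacement expanded for that match only (internally it calls Match.Result), not the whole source string with the match replaced. A minimal sketch, reusing the sample string from above:

SELECT dbo.fn_RegExReplace(
    '2010/09/3;2010/07/09;2009/1/15',
    '(\d{4})/(\d{1,2})/(\d{1,2})',
    '2011/$2/$3',
    2)  --returns '2011/07/09', the replacement expanded for the second match only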

From the examples we can see that CLR gives us real power to use Regular Expressions on SQL Server. It allows us to make complex transformations which would not be possible in pure T-SQL, and for transformations which can be done in pure T-SQL, the CLR RegEx solution is mostly faster and more straightforward.

Apr 212011
 

I’ve just posted an update to my article Querying Active Directory on SQL Server using CLR. I’ve added another two parameters to the samples: one of them allows controlling the page size limit for the paged AD search, and the second one allows limiting the number of rows returned to the client. This can be handy when the AD query can return more results than we need, as it makes it easy to cap the number of rows.

I’ve updated the article after a comment from Chris, who received an insufficient memory error when he wanted to return a higher number of fields from AD.

In such cases, lowering the page size from the default 1000 to a smaller value should avoid this kind of problem.
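
For illustration, here is a minimal sketch of a call using the two new parameters; the procedure and the AD names are taken from the samples in the original article, so adjust them to your environment:

EXECUTE [usp_QueryAD]
   @adRoot = 'LDAP://DC=Fabricam,DC=com'
  ,@filter = '(&(objectCategory=user)(objectClass=user))'
  ,@propertiesToLoad = 'sn;cn;ADsPath'
  ,@searchScope = 'subtree'
  ,@pageSize = 250    --lower page size to reduce memory pressure
  ,@rowsLimit = 100   --return at most 100 rows
GO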

Apr 132011
 

In my previous article Querying Active Directory on SQL Server using T-SQL I showed the possibilities of querying Active Directory using T-SQL, specifically using a linked server and the OPENQUERY statement, or without a linked server using the OPENROWSET statement.

This is an updated article and includes modifications, based on some of the comments, that resolve issues with a large number of returned AD properties.

That article mentioned some limitations of querying AD using T-SQL, and CLR will help us bypass those limitations, especially the limit of 1000 returned records, since we can use pagination in the CLR code effectively.

For this purpose I’m creating a CLR stored procedure which will take several arguments and return a result set. The reason I’m going to use a CLR stored procedure is that a stored procedure allows me to return a dynamic result set: I can specify which AD properties to return, and those will be returned as columns of the result set. If we went with a CLR table-valued function instead, we would have to create a separate function for each combination of properties we would like to return.

So here is the CLR class for querying AD.

public class ActiveDirectory
{
    /// <summary>
    /// Queries Active Directory according to the provided parameters.
    /// Current user credentials are used for authentication
    /// </summary>
    /// <param name="adRoot">AD root for querying AD</param>
    /// <param name="filter">Filter to be used for querying</param>
    /// <param name="searchScope">Scope to be used for querying</param>
    /// <param name="propertiesToLoad">List of properties to return</param>
    /// <param name="pageSize">Represents the page size for the paged search of AD</param>
    /// <param name="rowsLimit">Represents the limit for the number of rows returned. NULL or a value less than 1 represents unlimited</param>
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void QueryAD(string adRoot, string filter, string propertiesToLoad, string searchScope, int pageSize, SqlInt32 rowsLimit)
    {
        SearchAD(null, null, null, adRoot, filter, searchScope, propertiesToLoad, pageSize, rowsLimit);
    }

    /// <summary>
    /// Queries Active Directory according to the provided parameters
    /// </summary>
    /// <param name="userName">UserName to be used to authenticate to AD</param>
    /// <param name="password">Password to be used to authenticate to AD</param>
    /// <param name="adRoot">AD root for querying AD</param>
    /// <param name="filter">Filter to be used for querying</param>
    /// <param name="searchScope">Scope to be used for querying</param>
    /// <param name="propertiesToLoad">List of properties to return</param>
    /// <param name="pageSize">Represents the page size for the paged search of AD</param>
    /// <param name="rowsLimit">Represents the limit for the number of rows returned. NULL or a value less than 1 represents unlimited</param>
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void QueryADUName(string userName, string password, string adRoot, string filter, string propertiesToLoad, string searchScope, int pageSize, SqlInt32 rowsLimit)
    {
        SearchAD(userName, password, null, adRoot, filter, searchScope, propertiesToLoad, pageSize, rowsLimit);
    }

    /// <summary>
    /// Queries Active Directory according to the provided parameters
    /// </summary>
    /// <param name="userName">UserName to be used to authenticate to AD</param>
    /// <param name="password">Password to be used to authenticate to AD</param>
    /// <param name="authType">Authentication type to be used to authenticate to AD</param>
    /// <param name="adRoot">AD root for querying AD</param>
    /// <param name="filter">Filter to be used for querying</param>
    /// <param name="searchScope">Scope to be used for querying</param>
    /// <param name="propertiesToLoad">List of properties to return</param>
    /// <param name="pageSize">Represents the page size for the paged search of AD</param>
    /// <param name="rowsLimit">Represents the limit for the number of rows returned. NULL or a value less than 1 represents unlimited</param>
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void QueryADAuth(string userName, string password, string authType, string adRoot, string filter, string propertiesToLoad, string searchScope, int pageSize, SqlInt32 rowsLimit)
    {
        SearchAD(userName, password, authType, adRoot, filter, searchScope, propertiesToLoad, pageSize, rowsLimit);
    }

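    /// <summary>
    /// Creates the root DirectoryEntry using the current credentials, explicit credentials,
    /// or explicit credentials with a specific authentication type, depending on the provided parameters
    /// </summary>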
    private static DirectoryEntry GetRootEntry(string adRoot, string userName, string password, string authType)
    {
        if (userName == null)
            return new DirectoryEntry(adRoot);
        else if (authType == null)
            return new DirectoryEntry(adRoot, userName, password);
        else
        {
            AuthenticationTypes at;
            if (TryParseEnum<AuthenticationTypes>(authType, true, out at))
                return new DirectoryEntry(adRoot, userName, password, at);
            else
                throw new System.InvalidCastException(string.Format("authType must be one of '{0}'", GetEnumNames<AuthenticationTypes>()));
        }
    }

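    /// <summary>
    /// Returns a comma-separated list of the names of the enum T (used to build error messages)
    /// </summary>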
    private static string GetEnumNames<T>()
    {
        string[] names = Enum.GetNames(typeof(T));
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < names.Length; i++)
        {
            if (i > 0)
                sb.Append(',');
            sb.Append(names[i]);
        }
        return sb.ToString();
    }

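    /// <summary>
    /// Tries to parse a string into the enum T; returns false (and the enum's first value) when parsing fails
    /// </summary>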
    private static bool TryParseEnum<T>(string value, bool ignoreCase, out T outEnum)
    {
        try
        {
            outEnum = (T)Enum.Parse(typeof(T), value, ignoreCase);
            return true;
        }
        catch
        {
            outEnum = (T)Enum.GetValues(typeof(T)).GetValue(0);
            return false;
        }
    }

    /// <summary>
    /// Searches Active Directory according to the provided parameters
    /// </summary>
    /// <param name="userName">UserName to be used to authenticate to AD</param>
    /// <param name="password">Password to be used to authenticate to AD</param>
    /// <param name="authType">Authentication type to be used to authenticate to AD</param>
    /// <param name="adRoot">AD root for querying AD</param>
    /// <param name="filter">Filter to be used for querying</param>
    /// <param name="searchScope">Scope to be used for querying</param>
    /// <param name="propertiesToLoad">List of properties to return</param>
    /// <param name="pageSize">Represents the page size for the paged search of AD</param>
    /// <param name="rowsLimit">Represents the limit for the number of rows returned. NULL or a value less than 1 represents unlimited</param>
    private static void SearchAD(string userName, string password, string authType, string adRoot, string filter, string searchScope, string propertiesToLoad, int pageSize, SqlInt32 rowsLimit)
    {
        //properties are separated by semicolons; within each item a comma can optionally specify the return length
        string[] properties = propertiesToLoad.Split(new char[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
        SqlMetaData[] recordMetaData = new SqlMetaData[properties.Length];
        SearchScope scope;

        int limit = rowsLimit.IsNull ? 0 : rowsLimit.Value;
        int rowsCount = 0;

        if (limit > 0 && pageSize > limit) //no need to fetch pages bigger than the rows limit
            pageSize = limit;

        if (!TryParseEnum<SearchScope>(searchScope, true, out scope))
            throw new System.InvalidCastException(string.Format("searchScope must be one of '{0}'", GetEnumNames<SearchScope>()));

        //Trim properties and prepare result set metadata, also process specified lengths
        for (int i = 0; i < properties.Length; i++)
        {
            string[] propDetails = properties[i].Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
            string propName = propDetails[0].Trim();
            int len;

            if (propDetails.Length < 2 || !int.TryParse(propDetails[1], out len) || len < 1 || len > 4000)
                len = 4000;

            recordMetaData[i] = new SqlMetaData(propName, System.Data.SqlDbType.NVarChar, len); //column name is the trimmed property name
            properties[i] = propName;
        }

        //Get Root Directory Entry
        using (DirectoryEntry rootEntry = GetRootEntry(adRoot, userName, password, authType))
        {
            //Create a directory searcher with appropriate filter, properties and search scope
            using (DirectorySearcher ds = new DirectorySearcher(rootEntry, filter, properties, scope))
            {
                ds.PageSize = pageSize; //Set page size - without this we would not do a paged search and would be limited to 1000 results

                //find all objects from the root, according to the filter and search scope
                using (SearchResultCollection results = ds.FindAll())
                {
                    SqlDataRecord record = new SqlDataRecord(recordMetaData);
                    //Start pushing of records to client
                    SqlContext.Pipe.SendResultsStart(record);

                    foreach (SearchResult result in results)
                    {
                        record = new SqlDataRecord(recordMetaData);

                        for (int i = 0; i < properties.Length; i++)
                        {
                            ResultPropertyValueCollection props = result.Properties[properties[i]];

                            if (props.Count == 1)           //if the property collection contains a single value, set the record field to that value
                                record.SetSqlString(i, props[0].ToString());
                            else if (props.Count == 0)      //if property collection doesn't contain any value, set record field to NULL
                                record.SetSqlString(i, SqlString.Null);
                            else                            //In case of multiple values, separate the values by commas
                            {
                                StringBuilder sb = new StringBuilder();
                                bool firstItem = true;
                                foreach (object prop in props)
                                {
                                    if (!firstItem)
                                        sb.Append(',');
                                    else
                                        firstItem = false;

                                    sb.Append(prop.ToString());
                                }

                                record.SetSqlString(i, sb.ToString());
                            }
                        }

                        //send record to client
                        SqlContext.Pipe.SendResultsRow(record);

                        //if rowsLimit was reached, break the loop
                        if (++rowsCount == rowsLimit)
                            break;
                    }

                    //stop sending records to client
                    SqlContext.Pipe.SendResultsEnd();
                }
            }
        }
    }
}

The class contains three public methods (CLR stored procedures): QueryAD, QueryADUName and QueryADAuth. The first one will query AD using the current user credentials and the default authentication method, the second will query AD using the provided user credentials and the default authentication method, and with the third one we can also specify the authentication method.

Those methods call a private method, SearchAD, which takes care of the AD searching and returns the results to the client.

After the comment from Chris, I’ve updated my sample code to include the optional parameter pageSize, which allows reducing the page size of the paged search used for querying AD and avoids insufficient memory problems when querying a higher number of AD attributes.

I’ve also added a parameter rowsLimit, which allows limiting the maximum number of rows returned; this can be especially useful when querying a very large AD. A break was added to the loop which iterates the results from AD, so it stops once the number of processed items reaches the limit.

UPDATE:

Finally, I’ve also updated the private static void SearchAD method, so now you pass the properties not as a comma-separated list but as a semicolon-separated list. Within each item you can use a comma to specify the return length of the property. If the length is not specified, the method will use a default length of 4000 characters. This update should finally solve the issues with a large number of returned properties, which were previously all returned as nvarchar(4000).

The updated part of the code is the for loop which processes the properties and generates the result set metadata, starting with the comment //Trim properties and prepare result set metadata, also process specified lengths.

END OF UPDATE:

To be able to compile the code, for example using Visual C# Express, it is necessary to add a reference to the System.DirectoryServices assembly to the project.

Once we compile the code and create, say, an ADServices.dll assembly, we can register that assembly and the CLR stored procedures in our database.

Because the ActiveDirectory class uses the System.DirectoryServices assembly, we have to use the UNSAFE PERMISSION_SET for our assembly, and therefore the database using that assembly has to be TRUSTWORTHY.

ALTER DATABASE TestDB3 SET TRUSTWORTHY ON WITH ROLLBACK IMMEDIATE;

As we are using the System.DirectoryServices assembly, we have to register it in our database prior to registering our own assembly; otherwise we will not be able to register it.

--Register the .NET Framework System.DirectoryServices assembly
CREATE ASSEMBLY [System.DirectoryServices]
AUTHORIZATION [dbo]
FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.DirectoryServices.dll'
WITH PERMISSION_SET = UNSAFE
GO
--Register our assembly with ActiveDirectory CLR Stored Procedures
CREATE ASSEMBLY [ADServices]
AUTHORIZATION [dbo]
FROM 'C:\CLR\ADServices.dll'
WITH PERMISSION_SET = UNSAFE
GO

Once we have successfully registered those assemblies, we can create the CLR Stored procedures in our DB.

--==========================================================================
-- @adRoot - root of searching e.g. 'LDAP://OU=Sales,DC=Fabricam,DC=com'
-- @filter - filter to be used for searching e.g. '(&(objectCategory=group))'
-- @propertiesToLoad - list of properties to be retrieved e.g. 'cn,50;ADsPath'
--                   - properties are separated by semicolons
--                   - a comma can be used to specify the return length of a property
-- @searchScope - scope to be used for searching: {Base,OneLevel,Subtree}
-- @pageSize - specifies the page size for the paged search - default is 1000
--             it is possible to lower the value if there is a problem
--             retrieving such an amount of records at once
-- @rowsLimit - represents the maximum number of rows returned.
--              NULL or a value less than 1 represents unlimited
--==========================================================================
CREATE PROCEDURE usp_QueryAD
    @adRoot nvarchar(255),
    @filter nvarchar(255),
    @propertiesToLoad nvarchar(255),
    @searchScope nvarchar(8),
    @pageSize int = 1000,
    @rowsLimit int = 0
AS
EXTERNAL NAME [ADServices].[ActiveDirectory].[QueryAD]
GO
--==========================================================================
-- @userName - username to be used for authentication to AD
-- @password - password to be used for authentication to AD
-- @adRoot - root of searching e.g. 'LDAP://OU=Sales,DC=Fabricam,DC=com'
-- @filter - filter to be used for searching e.g. '(&(objectCategory=group))'
-- @propertiesToLoad - list of properties to be retrieved e.g. 'cn,50;ADsPath'
--                   - properties are separated by semicolons
--                   - a comma can be used to specify the return length of a property
-- @searchScope - scope to be used for searching: {Base,OneLevel,Subtree}
-- @pageSize - specifies the page size for the paged search - default is 1000
--             it is possible to lower the value if there is a problem
--             retrieving such an amount of records at once
-- @rowsLimit - represents the maximum number of rows returned.
--              NULL or a value less than 1 represents unlimited
--==========================================================================
CREATE PROCEDURE usp_QueryADUname
    @userName nvarchar(255),
    @password nvarchar(255),
    @adRoot nvarchar(255),
    @filter nvarchar(255),
    @propertiesToLoad nvarchar(255),
    @searchScope nvarchar(8),
    @pageSize int = 1000,
    @rowsLimit int = 0
AS
EXTERNAL NAME [ADServices].[ActiveDirectory].[QueryADUName]
GO
--==========================================================================
-- @userName - username to be used for authentication to AD
-- @password - password to be used for authentication to AD
-- @authType - authentication type to be used for AD authentication
--             {None,Secure,Encryption,SecureSocketsLayer,ReadonlyServer,Anonymous,
--             FastBind,Signing,Sealing,Delegation,ServerBind}
-- @adRoot - root of searching e.g. 'LDAP://OU=Sales,DC=Fabricam,DC=com'
-- @filter - filter to be used for searching e.g. '(&(objectCategory=group))'
-- @propertiesToLoad - list of properties to be retrieved e.g. 'cn,50;ADsPath'
--                   - properties are separated by semicolons
--                   - a comma can be used to specify the return length of a property
-- @searchScope - scope to be used for searching: {Base,OneLevel,Subtree}
-- @pageSize - specifies the page size for the paged search - default is 1000
--             it is possible to lower the value if there is a problem
--             retrieving such an amount of records at once
-- @rowsLimit - represents the maximum number of rows returned.
--              NULL or a value less than 1 represents unlimited
--==========================================================================
CREATE PROCEDURE usp_QueryADAuth
    @userName nvarchar(255),
    @password nvarchar(255),
    @authType nvarchar(20),
    @adRoot nvarchar(255),
    @filter nvarchar(255),
    @propertiesToLoad nvarchar(255),
    @searchScope nvarchar(8),
    @pageSize int = 1000,
    @rowsLimit int = 0
AS
EXTERNAL NAME [ADServices].[ActiveDirectory].[QueryADAuth]
GO

After successfully creating the stored procedures, we can start querying AD. The @filter parameter is an LDAP filter according to the Search Filter Syntax.

--Query AD Groups
--Result set will contain cn - common name, ADsPath
EXECUTE [usp_QueryAD] 
   @adRoot = 'LDAP://DC=Fabricam,DC=com'
  ,@filter = '(&(objectCategory=group))'
  ,@propertiesToLoad = 'cn,50;ADsPath'
  ,@searchScope = 'subtree'
GO

--Query users of Sales organization unit
--Result set will contain sn - surname, cn - common name and ADsPath
EXECUTE [usp_QueryAD] 
   @adRoot = 'LDAP://OU=Sales,DC=Fabricam,DC=com'
  ,@filter = '(&(objectCategory=user)(objectClass=user))'
  ,@propertiesToLoad = 'sn,cn,ADsPath'
  ,@searchScope = 'subtree'
GO

--Query all users belonging to the group "Sales Representatives", providing a user name and password
--for AD authentication, including the authentication type
--Result set will contain sn - surname, cn - common name and ADsPath
EXECUTE [TestDB3].[dbo].[usp_QueryADAuth] 
   @userName = 'FabricamADUserName'
  ,@password = 'ADuserNamePassword'
  ,@authType = 'secure'
  ,@adRoot = 'LDAP://DC=Fabricam,DC=com'
  ,@filter = '(&(objectCategory=user)(objectClass=user)(memberOf=CN=Sales Representatives,OU=Security Groups,OU=Groups,DC=Fabricam,DC=com))'
  ,@propertiesToLoad = 'sn,100;cn,200;ADsPath,1000'
  ,@searchScope = 'subtree'
GO

As we use a stored procedure for querying AD, we cannot work with the results directly any further. (OK, on Denali it will be possible thanks to EXECUTE ... WITH RESULT SETS.) On SQL Server 2005 and 2008 we can store the results, e.g. into a table variable or a temp table, and then work with them as usual. From the CLR code we can see that the result set contains all the AD properties we passed as a parameter, and the order is exactly the same as in the input parameter. The data type of each returned column is nvarchar with the length specified in the properties list. If no length was specified, or the length was less than one or greater than 4000, then the return type is nvarchar(4000).

--Query All users from AD, store them in a table variable and then
--select all users with their cn (common name) starting by "C"
DECLARE @result TABLE (
    sn nvarchar(100),
    cn nvarchar(200),
    ADsPath nvarchar(1000)
)

INSERT INTO @result
EXECUTE [TestDB3].[dbo].[usp_QueryADAuth] 
   @userName = 'FabricamADUserName'
  ,@password = 'ADuserNamePassword'
  ,@authType = 'secure'
  ,@adRoot = 'LDAP://DC=Fabricam,DC=com'
  ,@filter = '(&(objectCategory=user)(objectClass=user))'
  ,@propertiesToLoad = 'sn,100;cn,200;ADsPath,1000'
  ,@searchScope = 'subtree'

SELECT 
*
FROM @result
WHERE cn LIKE 'C%'

From the examples above we can see that once we create the CLR stored procedures for querying AD, the queries themselves are quite easy. Compared to the T-SQL solution with a linked server or OPENROWSET, we have much greater possibilities and, most importantly, we are not limited to 1000 results per query, so we can easily query all the objects in AD.

If you are interested, you can download the sample ADServices Visual C# 2010 Express project. It contains all the latest updates I have mentioned.

Here you can download an updated sample PPSqlDirectoryServices Visual C# 2010 project with all the latest updates implemented.