It’s [Still!] the SQL Injection… Stupid

Did you see Imperva’s October 2012 Hacker Intelligence Report? The report is a data mining study of on-line forum behavior among a purportedly representative group of hackers. The milestone for October 2012 is that Imperva now has a year’s worth of data upon which to report – almost half a million threads in the mine. In this study, keyword analysis found SQL injection sharing the top of the heap with DDoS in terms of what hackers talk about in the forums. In the wake of PowerShell 3.0, the study also identifies shell code as the big up-and-comer for mentions in the hacker threads. Only 11% of the forum posts mention “brute force” – the only topical category Imperva charted with a direct relationship to cryptography.

The absence of an aggregate specifically dedicated to cryptography or encryption strongly suggests the hackers are not talking much about cryptography. Hmmmm.

Keyword frequency in 439,587 forum threads:

  1. SQL injection 19%
  2. DDoS 19%
  3. shell code 15%
  4. spam 14%
  5. XSS 12%
  6. brute force 11%
  7. HTML injection 9%

The report also cites disturbing data from a 2012 Gartner study, “Worldwide Spending on Security by Technology Segment, Country and Region, 2010-2016”. The Gartner study purportedly finds that less than 5% of money spent on computing security products buys products that are useful against SQL injection.

Holy octopus slippers! The statistics definitely make a good case for taking a look at Imperva’s products. Even if there is some marketing bias in the study – I don’t think there is, but even if there were – the findings are more bad news for data security. What’s worse, we seem to be headed in the wrong direction. Consider:

  • The SQL Server 2008 Books Online had a page of SQL injection prevention best practices that was removed from the SQL Server 2012 edition.
  • SQL injection prevention guidance has been largely unchanged for many years yet is still not widely followed. Input validation is the key; a minimal parameterization sketch follows this list. Judging by the hacker interest in SQL injection, adoption of the guidance must be low.
  • Hackmageddon.com’s Cyber Attack Statistics indicate that SQL injection is found in over 25% of the hacker attacks documented during October 2012.
  • Cloud host FireHost – a VMware-based web host with an impressive security claim and data centers in Phoenix, Dallas, London and Amsterdam – reported a 69% spike in SQL injection attacks detected and defended by its infrastructure in the second quarter of 2012 (cross-site scripting then surged in the just-ended Q3).
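
The core of that unchanged guidance is to validate input and keep user text out of concatenated command strings. As a minimal illustration – the table, column and variable names here are hypothetical, not taken from any product or utility mentioned in this post – a dynamic search is far safer bound through sp_executesql parameters than glued into the statement:

DECLARE @search NVARCHAR(128) = N'O''Hara';   -- pretend this arrived from a web form

-- risky: user text concatenated straight into the command
-- SET @cmd = N'SELECT * FROM dbo.Customer WHERE LastName = ''' + @search + N''';';

-- parameterized: the input is bound as data and never parsed as T-SQL
DECLARE @cmd NVARCHAR(200) = N'SELECT * FROM dbo.Customer WHERE LastName = @LastName;';
EXEC sp_executesql @cmd, N'@LastName NVARCHAR(128)', @LastName = @search;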

What is stopping us from truly going after this problem? Input validation is not hard and need not be a bottleneck. Threat detection is supposedly a part of any ACE/PCI/FIPS/HIPAA compliant system. Detection avoidance is well understood. Nonetheless, and for reason$ far beyond reasonable, the strategic emphasis remains stuck on compliance. That would be OK if the standards of compliance were even close to adequate. Clearly they are not. The proof is in the pudding.

There are SQL injection detection and avoidance products out there that work – many, many products. Just to name a few, and by way of describing the types of tools that forward-thinking organizations are already buying in an effort to eradicate SQL injection:

Application Delivery Network (ADN)

  • Citrix NetScaler
  • F5 BIG-IP
  • Fortinet FortiGate

Web Application Firewall (WAF)

  • Applicure dotDefender WAF
  • Cisco ACE Web Application Firewall
  • Imperva Web Application Firewall & Cloud WAF
  • Barracuda Networks Web Application Firewall
  • Armorize SmartWAF (Web server host based)

Web Vulnerability Scanners (WVS)

  • sqlmap (free)
  • Acunetix WVS

Unified Threat Management (UTM)

  • Checkpoint UTM-1 Total Security Appliance
  • Sophos UTM Web Server Protection
  • Watchguard XTM

These products and the overarching practice changes needed to implement them show success in going after the problem. But, as the Gartner study shows, nobody seems to be buying it.

There are also cloudy platform hosts and ISPs like FireHost that handle the WAF for organizations that cannot justify the capital costs and FTEs required to do the job right in-house due to scale.

Ever so slowly the major hosts are imposing WAF upon all tenants. Another n years at the current snail’s pace and the security improvement just might be noticeable. Seems to me like “do it and do it now” is the only choice that can reverse a ridiculous situation that has gone on too long already. Even secure hosts prioritize profitability over basic security. That is rent seeking.

Any web presence that takes another tack is telegraphing priorities that violate the duty to protect that which is borrowed under an explicit promise of confidentiality, or under generally accepted fiduciary performance levels equivalent to all other financial and intellectual assets of that organization. Few meet the sniff test. Many remain highly profitable. Just take a look in your Facebook mirror. Customers and consumers have no visibility into how any organization will honor this responsibility nor recourse when that duty is shirked. The metrics above make clear that poor security practices shirk the responsibility and carelessly feed the identity theft racket. It must end. Organizations that perpetuate this status quo and remain profitable are complicit in an ongoing crime against humanity. IT staff who are fairly characterized as “team players” or leaders in such organizations are every bit as culpable as the soldiers of Auschwitz or My Lai or the owner of a piano with ivory keys.

Organizations private and public have a fundamental obligation to protect customers, clients and citizens from illegal exploitation. What in the world makes it OK to exclude chronic identity theft violations from that responsibility?

Even when the data center budget includes one of the more robust solutions, doing the needful in terms of basic input validation, code/user authentication and principle-of-least-privilege access rights is essential to any good defense-in-depth security strategy.

Consider this T-SQL function from the Encryption Hierarchy Administration schema that implements a passphrase hardness test based upon the SQL injection prevention guidance in the SQL 2008 BOL.

1   -------------------------------------------------------------------------------
2   --    bwunder at yahoo dot com
3   --    Desc: password/passphrase gauntlet
4   --    phrases are frequently used in dynamic SQL so SQL Injection is risk
5   -------------------------------------------------------------------------------
6   CREATE FUNCTION $(EHA_SCHEMA).CheckPhrase 
7     ( @tvp AS NAMEVALUETYPE READONLY )
8   RETURNS @metatvp TABLE 
9     ( Status NVARCHAR (36)
10    , Signature VARBINARY (128) )
11  $(WITH_OPTIONS)
12  AS
13  BEGIN
14    DECLARE @Status NVARCHAR (36)
15          , @Name NVARCHAR(448)
16          , @UpValue NVARCHAR (128) 
17          , @Value NVARCHAR (128) ;
18    -- dft password policy as described in 2008R2 BOL + SQL Injection black list
19    -- fyi: SELECT CAST(NEWID() AS VARCHAR(128)) returns a valid password 
20    SET @Status = 'authenticity';
21    IF EXISTS ( SELECT *
22                FROM sys.certificates c
23                JOIN sys.crypt_properties cp
24                ON c.thumbprint = cp.thumbprint
25                CROSS JOIN sys.database_role_members r
26                WHERE r.role_principal_id = DATABASE_PRINCIPAL_ID ( '$(SPOKE_ADMIN_ROLE)' ) 
27                AND r.member_principal_id = DATABASE_PRINCIPAL_ID ( ORIGINAL_LOGIN() )  
28                AND c.name = '$(OBJECT_CERTIFICATE)'
29              AND c.pvt_key_encryption_type = 'PW'
30                AND cp.major_id = @@PROCID 
31                AND @@NESTLEVEL > 1 -- no direct exec of function 
32                AND IS_OBJECTSIGNED('OBJECT', @@PROCID, 'CERTIFICATE', c.thumbprint) = 1
33                AND EXISTS ( SELECT * FROM sys.database_role_members 
34                              WHERE [role_principal_id] = USER_ID('$(SPOKE_ADMIN_ROLE)')
35                              AND USER_NAME ([member_principal_id]) = SYSTEM_USER 
36                              AND SYSTEM_USER = ORIGINAL_LOGIN() ) )        
37      BEGIN
38        SET @Status = 'decode';
39        SET @Name = ( SELECT DECRYPTBYKEY( Name 
40                                         , 1
41                                         , CAST( KEY_GUID('$(SESSION_SYMMETRIC_KEY)') AS NVARCHAR (36) ) ) 
42        FROM @tvp );
43        SET @Value = ( SELECT DECRYPTBYKEY( Value, 1, @Name ) FROM @tvp );                    
44        IF PATINDEX('%.CONFIG', UPPER(@Name) )  -- no strength test, will fall through 
45         + PATINDEX('%.IDENTITY', UPPER(@Name) )             
46         + PATINDEX('%.PRIVATE', UPPER(@Name) ) 
47         + PATINDEX('%.SALT', UPPER(@Name) )           
48         + PATINDEX('%.SOURCE', UPPER(@Name) ) > 0       
49          SET @Status = 'OK';
50        ELSE
51          BEGIN
52            SET @UpValue = UPPER(@Value);
53            SET @Status = 'strength';
54            IF ( (    ( LEN(@Value) >= $(MIN_PHRASE_LENGTH) )   -- more is better
55                  AND ( PATINDEX('%[#,.;:]%'
56                      , @Value ) = 0 )   -- none of these symbols as recommended in BOL 
57                  AND ( SELECT CASE WHEN PATINDEX('%[A-Z]%'
58                                                  , @Value) > 0 
59                                    THEN 1 ELSE 0 END    -- has uppercase
60                              + CASE WHEN PATINDEX('%[a-z]%'
61                                                  , @Value) > 0 
62                                    THEN 1 ELSE 0 END    -- has lowercase  
63                              + CASE WHEN PATINDEX('%[0-9]%'
64                                                  , @Value) > 0 
65                                  THEN 1 ELSE 0 END    -- has number
66                              + CASE WHEN PATINDEX('%[^A-Za-z0-9]%'       -- has special
67                                                  , REPLACE( @Value, SPACE(1), '' )
68                                                  ) > 0
69                                    THEN 1 ELSE 0 END ) > 2 ) )   -- at least 3 of 4
70              BEGIN 
71                -- black list is not so strong but can look for the obvious 
72                SET @Status = 'injection';                       
73                IF ( PATINDEX('%[__"'']%', @UpValue)   -- underscore (so no sp_ or xp_) or quotes
74                   + PATINDEX('%DROP%'   , @UpValue)   -- multi-character commands... 
75                   + PATINDEX('%ADD%'    , @UpValue)
76                   + PATINDEX('%CREATE%' , @UpValue)
77                   + PATINDEX('%SELECT%' , @UpValue)
78                   + PATINDEX('%INSERT%' , @UpValue)
79                   + PATINDEX('%UPDATE%' , @UpValue)
80                   + PATINDEX('%DELETE%' , @UpValue)
81                   + PATINDEX('%GRANT%'  , @UpValue)
82                   + PATINDEX('%REVOKE%' , @UpValue)
83                   + PATINDEX('%RUNAS%'  , @UpValue)
84                   + PATINDEX('%ALTER%'  , @UpValue)
85                   + PATINDEX('%EXEC%'   , @UpValue)
86                   + PATINDEX('%--%'     , @Value)     -- comments...
87                   + PATINDEX('%*/%'     , @Value) 
88                   + PATINDEX('%/*%'     , @Value)  = 0 )
89                  BEGIN 
90                    SET @Status = 'duplicate';
91                    IF NOT EXISTS ( SELECT *                  -- not already used  
92                                    FROM $(EHA_SCHEMA).$(NAMEVALUES_TABLE) n
93                                    WHERE ValueBucket = $(EHA_SCHEMA).AddSalt( '$(SPOKE_DATABASE)'
94                                                                              , '$(EHA_SCHEMA)'
95                                                                              , '$(NAMEVALUES_TABLE)'
96                                                                              , 'ValueBucket' 
97                                                                              , @Value)
98                                    AND CAST(DecryptByKey( n.Value -- should be rare
99                                                          , 1
100                                                         , @Name ) AS NVARCHAR (128) )  =  @Value )  
101                     SET @Status = 'OK';
102                 END
103             END
104          END
105     END
106   INSERT @metatvp
107     ( Status
108     , Signature ) 
109   VALUES 
110     ( @Status
111    , SignByCert( CERT_ID('$(AUTHENTICITY_CERTIFICATE)'), @Status ) );
112   RETURN;
113 END
114 GO
115 ADD SIGNATURE TO $(EHA_SCHEMA).CheckPhrase 
116 BY CERTIFICATE $(OBJECT_CERTIFICATE)
117 WITH PASSWORD = '$(OBJECT_CERTIFICATE_ENCRYPTION_PHRASE)';
118 GO

SQL injection input validation is only part of what goes on here. The function accepts an already encrypted name-value pair TVP as a parameter and returns a signed business rule validation result as a TVP. To do so, the schema and user authenticity are first verified, then the phrase is decoded and the SQL injection detection rules are applied. Only if all rules are met will an I/O be required to verify that the phrase has not already been used.

The bi-directional encoding of parameters with a private session-scoped symmetric key helps to narrow the SQL injection threat vector even before the filter(s) can be applied. This means that the passed values have already been used successfully in a T-SQL ENCRYPTBYKEY command in the current database session. Not that encryption does anything to prevent or detect SQL injection; it is more that the first touch of any user input value carries higher risk. Likewise, the first use of an input in any dynamic SQL statement carries a higher risk. Always better to do something benign with user input before you risk rubbing it against your data.
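
To make the calling pattern concrete, a sketch of a hypothetical caller is below. It assumes the utility has already opened the session symmetric key and that NAMEVALUETYPE carries VARBINARY Name and Value columns; the name and passphrase literals are made up for illustration. In the utility the call is made from a signed stored procedure so the @@NESTLEVEL and signature checks inside the function pass; called directly from a batch the function would simply return the 'authenticity' status.

-- hypothetical caller: encrypt the pair with the session key, then validate
DECLARE @tvp AS NAMEVALUETYPE;

INSERT @tvp ( Name, Value )
VALUES ( ENCRYPTBYKEY( KEY_GUID( '$(SESSION_SYMMETRIC_KEY)' )
                     , N'SomeName.PASSPHRASE'
                     , 1
                     , CAST( KEY_GUID( '$(SESSION_SYMMETRIC_KEY)' ) AS NVARCHAR (36) ) )
       , ENCRYPTBYKEY( KEY_GUID( '$(SESSION_SYMMETRIC_KEY)' )
                     , N'a candidate passphrase to be tested'
                     , 1
                     , N'SomeName.PASSPHRASE' ) );  -- authenticator is the plain text name

SELECT Status, Signature
FROM $(EHA_SCHEMA).CheckPhrase( @tvp );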

In the process of validation, two black lists are used to filter punctuation (line 55) and specific character sequences (lines 73-88) frequently identified as injection markers.

Another function from the schema validates names for T-SQL encryption hierarchy key export files. In this function the black list filter also includes the file system specific markers identified in the same SQL Server 2008 R2 Books Online article. The same somewhat cumbersome PATINDEX() driven exclusion filter pattern used for the hardness test is used in the file name function.

-------------------------------------------------------------------------------
--    bwunder at yahoo dot com
--    Desc: apply file naming rules and conventions
--    name not already in use and no identified sql injection
-------------------------------------------------------------------------------
CREATE FUNCTION $(EHA_SCHEMA).CheckFile 
  ( @Name VARBINARY (8000) )
RETURNS BIT
$(WITH_OPTIONS)
AS
BEGIN
  RETURN (SELECT CASE WHEN  PATINDEX('%[#,.;:"'']%', Name) 
                          + PATINDEX('%--%', Name)
                          + PATINDEX('%*/%', Name)
                          + PATINDEX('%/*%', Name)
                          + PATINDEX('%DROP%', Name)
                          + PATINDEX('%CREATE%', Name)
                          + PATINDEX('%SELECT%', Name)
                          + PATINDEX('%INSERT%', Name)
                          + PATINDEX('%UPDATE%', Name)
                          + PATINDEX('%DELETE%', Name)
                          + PATINDEX('%GRANT%', Name)
                          + PATINDEX('%ALTER%', Name) 
                          + PATINDEX('%AUX%', Name) 
                          + PATINDEX('%CLOCK$%', Name) 
                          + PATINDEX('%COM[1-8]%', Name)
                          + PATINDEX('%CON%', Name) 
                          + PATINDEX('%LPT[1-8]%', Name) 
                          + PATINDEX('%NUL%', Name) 
                          + PATINDEX('%PRN%', Name) = 0
                      AND NOT EXISTS 
              ( SELECT *                        -- name not already in use
                FROM $(EHA_SCHEMA).$(BACKUP_ACTIVITY_TABLE)
                WHERE BackupNameBucket = $(EHA_SCHEMA).AddSalt( '$(SPOKE_DATABASE)'
                                                              , '$(EHA_SCHEMA)'
                                                              , '$(BACKUP_ACTIVITY_TABLE)'
                                                              , 'BackupNameBucket' 
                                                              , Name ) )    
                      THEN 1 ELSE 0 END
          FROM (SELECT CAST( DECRYPTBYKEY ( @Name ) AS NVARCHAR(448) ) AS Name  
                FROM sys.certificates c
                JOIN sys.crypt_properties cp
                ON c.thumbprint = cp.thumbprint
                CROSS JOIN sys.database_role_members r
                WHERE r.role_principal_id = DATABASE_PRINCIPAL_ID ( '$(SPOKE_ADMIN_ROLE)' ) 
                AND r.member_principal_id = DATABASE_PRINCIPAL_ID ( ORIGINAL_LOGIN() )  
                AND c.name = '$(OBJECT_CERTIFICATE)'
                AND c.pvt_key_encryption_type = 'PW'
                AND cp.major_id = @@PROCID 
                AND @@NESTLEVEL > 1 
                AND IS_OBJECTSIGNED('OBJECT', @@PROCID, 'CERTIFICATE', c.thumbprint) = 1
                AND EXISTS (SELECT * FROM sys.database_role_members 
                            WHERE [role_principal_id] = USER_ID('$(SPOKE_ADMIN_ROLE)')
                            AND USER_NAME ([member_principal_id]) = SYSTEM_USER 
                            AND SYSTEM_USER = ORIGINAL_LOGIN() ) ) AS derived );
END
GO
ADD SIGNATURE TO $(EHA_SCHEMA).CheckFile 
BY CERTIFICATE $(OBJECT_CERTIFICATE)
WITH PASSWORD = '$(OBJECT_CERTIFICATE_ENCRYPTION_PHRASE)';
GO

When processing happens one string at a time this filter is of small performance concern. However, processing a large set against such a filter could be slow and disruptive. The objects are obfuscated into the database WITH ENCRYPTION, so only those with elevated access, development environment access or source repository access are likely to be aware of the filter details. Most of the logic in the functions is there to verify the authority and authenticity of the caller and the calling object.
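
As an aside, since WITH ENCRYPTION hides module text from the catalog, a quick way to confirm which schema objects are actually obfuscated in a deployed database is to look for the NULL definitions; a minimal sketch:

-- modules created WITH ENCRYPTION report a NULL definition
SELECT OBJECT_SCHEMA_NAME(object_id) AS [schema]
     , OBJECT_NAME(object_id) AS [module]
FROM sys.sql_modules
WHERE definition IS NULL;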

These functions demonstrate that fundamental SQL injection protection is easily achievable even for an application with the crippled regular expression support of T-SQL. If performance or load is a service level issue, the CLR might be a better host for the functions. However, as with obfuscation, the most important place to validate against SQL injection is at the point where data enters the system. In some cases SQL injection protection done after the unvalidated text moves inside SQL Server will be too late. Only when the user interface is a SQL command line could validating against SQL injection inside SQL Server be the best security choice. In scope, SQL injection prevention is an application layer exercise. That being said, skipping the SQL injection validation inside SQL Server is reckless. Robust security will employ layers of defense.

In my mind, the command line must always be considered an attack vector, even if it is not a favorite SQL injection attack vector at this time and even if the application makes no use of the command line. As the metamorphosis of the still-fluid security landscape unfolds, hackers will be shining their cyber-flashlights everywhere and chasing anything shiny. To discount the likelihood that the command line will continue to get a good share of malicious attention as long as there are command lines and hackers is negligence and/or denial.

For the Encryption Hierarchy Administration schema that uses the functions above, a PowerShell deployment and administration interface is helpful to improve security. With a pure T-SQL approach there is always a risk of exposure of user-entered secrets in memory buffers before the input value can be ciphered by the database engine. Granted, it is a brief period of time, and I am not even sure how one would go about sniffing SQLCMD command line input without running a debugger or waiting for the input to move into database engine workspace. It surely must be available somewhere in memory. The scenario is a target. I know I have never checked whether the operating system is running a debugger in any T-SQL script I have ever written. This helps to illustrate why the best time and place to encrypt data is at the time and place it is generated or enters the system. Even then, 100% certainty will remain elusive if the system cannot be verified to be keylogger free.

The utility models a somewhat unusual scenario where encryption at the database is the right choice. Nonetheless, getting the many secrets required for the administration of encryption keys and key backups entered into the system presents a potential for exposure to memory mining hackers. Using SecureString input and SecureString based SMO methods to instantiate the database objects that need the secrets can eliminate much of that vulnerability. As you may know, a SecureString is a .NET object that is kept encrypted, with a key tied to the current user’s session, at all times while in memory, and that supports explicit cleanup from memory that can be more secure than waiting on garbage collection. It is relatively easy for the user to decrypt the SecureString data on demand, but doing so results in sensitive information becoming available as clear text in the memory where the un-encrypted copy is written. No other users have access to the encryption key.

  
function Decode-SecureString 
{   
    [CmdletBinding( PositionalBinding=$true )]
    [OutputType( [String] )]
    param ( [Parameter( Mandatory=$true, ValueFromPipeline=$true )]
            [System.Security.SecureString] $secureString )  
    begin 
    { $marshal = [System.Runtime.InteropServices.Marshal] }
    process 
    { $BSTR = $marshal::SecureStringToBSTR($secureString )
     $marshal::PtrToStringAuto($BSTR) } 
    end
    { $marshal::ZeroFreeBSTR($BSTR) }
}

Decode-SecureString $( ConvertTo-SecureString '1Qa@wSdE3$rFgT' -AsPlainText -Force )

PowerShell obfuscates Read-Host user input of type SecureString with asterisks (*) on the input screen. With the ISE you get a WPF input dialog that more clearly shows the prompt but could also become annoying for command-line purists.

To evaluate the use of a PowerShell installer, I coded a PowerShell installation wrapper for the hub database installation scripts of the utility. The hub database needs four secrets: the passwords for four contained database users. With this change it makes no sense to add an extra trip to the database to evaluate the hardness and SQL injection vulnerability of each secret. Instead, the SQL injection input validation logic from the T-SQL functions above is migrated to a PowerShell advanced function – advanced meaning that the function acts like a cmdlet – that accepts a SecureString.

  
function Test-EHSecureString 
{   
    [CmdletBinding( PositionalBinding=$true )]
    [OutputType( [Boolean] )]
    param ( [Parameter( Mandatory=$true, ValueFromPipeline=$true )] 
            [System.Security.SecureString] $secureString
          , [Int32] $minLength = 14 
          , [Int32] $minScore = 3 )  
    begin 
    { 
        $marshal = [System.Runtime.InteropServices.Marshal] 
    }
    process 
    {   # need the var to zero & free unencrypted copy of secret
        [Int16] $score = 0
        $BSTR = $marshal::SecureStringToBSTR($secureString)
        if ( $marshal::PtrToStringAuto($BSTR).length -ge $minLength )
        { 
            switch -Regex ( $( $marshal::PtrToStringAuto($BSTR) ) )
           {
            '[#,.;:\\]+?' { Write-Warning ( 'character: {0}' -f $Matches[0] ); Break }
            '(DROP|ADD|CREATE|SELECT|INSERT|UPDATE|DELETE|GRANT|REVOKE|RUNAS|ALTER)+?' 
                          { Write-Warning ( 'SQL command: {0}' -f $Matches[0] ); Break }
            '(AUX|CLOCK|COM[1-8]|CON|LPT[1-8]|NUL|PRN)+?' 
                          { Write-Warning ( 'dos command: {0}' -f $Matches[0] ); Break } 
            '(--|\*\/|\/\*)+?'{ Write-Warning ( 'comment: {0}' -f $Matches[0] ); Break }
            '(?-i)[a-z]'  { $score+=1 }
            '(?-i)[A-Z]'  { $score+=1 }
            '\d+?'        { $score+=1 }
            '\S\W+?'      { $score+=1 }
            Default { Write-Warning $switch.current; Break }        
           } 
        }
        else
        { write-warning 
                     ( 'length: {0}' -f $( $marshal::PtrToStringAuto($BSTR).length ) ) } 
        write-warning ( 'score: {0}' -f $score )  
        $( $score -ge $minScore )
    }        
    end { $marshal::ZeroFreeBSTR($BSTR) }
}

One thing is for sure: much less code is required in PowerShell than in T-SQL to create a user. To securely invoke the function, a Read-Host -AsSecureString prompts for user input that will go into the memory location allocated to the SecureString only as encrypted data. Here the PowerShell script will prompt for input until it gets an acceptable value. Remember, no one else will be able to decode this value in memory – only the user that created the SecureString. Defense-in-depth demands that care be taken that the SecureString memory location is not captured for an off-line brute force interrogation.

 

do { $HUB_ADMIN_PASSWORD = $(Read-Host 'HUB_ADMIN_PASSWORD?' -AsSecureString ) } 
until ( $(Test-EHSecureString $HUB_ADMIN_PASSWORD ) )

Then the entered secret is applied to the database using SMO. In this case a user with password will be created in a contained database.

  
if ( $( Get-ChildItem -Name) -notcontains 'HubAdmin' )
{
    $HubAdmin = New-Object Microsoft.SqlServer.Management.Smo.User
    $HubAdmin.Parent = $smoHubDB
    $HubAdmin.Name = 'HubAdmin'     
    $HubAdmin.Create( $HUB_ADMIN_PASSWORD ) 
}

The Test-EHSecureString function does expose the clear text of the secret in the PSDebug trace stream during the switch operation as $switch. To my knowledge there is no way to obfuscate that value. On top of that, the disposal of the $switch automatic variable is under garbage collection, so there is no reliable way to know when you can stop wondering whether anyone found it. That uncertainty may be more of a risk than the SQLCMD exposure the SecureString is supposed to solve. On the other hand, the risk of exposure of the SQLCMD setvar values is undetermined, so it would be silly to pretend to quantify an unknown risk. What I know for sure is that those values have to touch primary storage buffers in order to load the variables and then populate the SQLCMD – $(VARIABLE_NAME) – tokens in the script. At least with the PowerShell script I can quantify the risk and take all necessary precautions to mitigate it. With SQLCMD setvar variables, about all I can do to be certain my secrets are not fished out of the free pool, or wherever else they might be buffered as clear text, is remove the power. Even that is no guarantee the secrets are not leaked to a paging file, spooler or log while exposed internally or externally as clear text as the system shuts down.

At this point I’m convinced that it is best to validate PowerShell SecureString input against SQL injection threats. The risk that someone will find the secrets in local memory and use them with malicious intent is, in my estimation, far less than the risk from SQL injection. I will continue to integrate PowerShell into the install script with the goal of using a PowerShell installer. This is a much better input scenario than the T-SQL event obfuscation technique I had been using.

I will leave the SQL injection filters in the T-SQL functions to complement the PowerShell filter for two reasons: defense-in-depth and [sic] defense-in-depth.

Posted in Code Review, Data Loading, Secure Data, Testing

TSQL Cryptographic Patterns – part 9: we’d better take this OFFLINE

There is a compelling defense-in-depth rationale for enabling AUTO_CLOSE on a database where sensitive data is stored.

ALTER DATABASE $(ANY_USER_DATABASE) SET AUTO_CLOSE ON;

When AUTO_CLOSE is ON the database will cleanly shut down when the last active user session disconnects or moves to another database. To be cleanly shut down means that the database can be opened again later without the need for SQL Server to run recovery on that database: everything in the log has been processed into the data set. FWIW: we don’t get to decide when recovery runs; the database engine makes that determination. We do get to mess around with the CHECKPOINT a little more in SQL Server 2012 with the TARGET_RECOVERY_TIME database option that overrides the server recovery interval. That actually does appear to be a step in the direction of exposing control of AUTO_CLOSE, though probably not an intentional one.
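
For reference, the database-level override looks like the statement below – a sketch against the same placeholder database name used above, with the 60 second target chosen arbitrarily:

-- SQL Server 2012: indirect checkpoint target for one database
ALTER DATABASE $(ANY_USER_DATABASE) SET TARGET_RECOVERY_TIME = 60 SECONDS;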

Using AUTO_CLOSE is easy:

  • Once enabled there is nothing more to do.
  • The easiest way to tell if AUTO_CLOSE is ON is to query the is_auto_close_on column in sys.databases.
  • The easiest way to tell if a database with AUTO_CLOSE ON is cleanly shutdown is to query the is_cleanly_shutdown column in sys.databases.
  • The most eye-opening way to tell if the database is closed at the present time is to try to copy the .mdf or .ldf. If you can copy the files the database is cleanly shut down; if you cannot, the database is open and accumulating resources, i.e., data pages, locks, latches, versions, query plans, connections, etc.

(Note that there are a few respectable bloggers claiming that AUTO_CLOSE has been marked for deprecation since SQL 2008. I believe there is some confusion. The blogs I have seen with this claim reference the SQL Server 2000 DMO AutoClose Property page as evidence. If you look, you will notice that all the DMO documentation pages for SQL Server 2000 carry the same deprecation warning. Pretty sure DMO is the deprecated technology, not AUTO_CLOSE. I could be wrong.)

When a database cleanly shuts down all resources held for that database are freed from memory. Log records are processed such that no recovery is required when the database “opens”. Encryption keys and certificates are closed preventing any free rides on encryption hierarchies opened during legitimate use. DMV data collected from that database disappears. The file system locks on all log and data files are released. Data is flushed from cache. If TDE or file system encryption is in use, this moves all data behind that layer of obfuscation. The unloading is asynchronous, happening within 300ms.

The main difference between a database with AUTO_CLOSE ON that is cleanly shut down and an OFFLINE database is the AUTO part. That is, an administrator must manually transition the database between ONLINE and OFFLINE and back, while AUTO_CLOSE automagically transitions the database between the unmodifiable state and the usable state for any valid request.
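
The manual transition is the familiar pair of statements below; the ROLLBACK IMMEDIATE clause is my addition to force off any lingering sessions and is optional:

ALTER DATABASE $(ANY_USER_DATABASE) SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- ...and before the next use
ALTER DATABASE $(ANY_USER_DATABASE) SET ONLINE;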

I notice that databases do not get the is_cleanly_shutdown bit set when the database is taken OFFLINE. While I cannot repro it on demand, I have also noticed that an OFFLINE database will force a recovery when it goes back ONLINE every now and again. The documentation is clear that an OFFLINE database is cleanly shut down. Wonder what’s up with that?


SELECT name, is_auto_close_on, state_desc, is_cleanly_shutdown
FROM sys.databases WHERE PATINDEX('Test%', name ) = 1;


name              is_auto_close_on state_desc  is_cleanly_shutdown
----------------- ---------------- ----------  ------------------- 
Test_OFFLINE                     0    OFFLINE                    0 
Test_AUTO_CLOSE                  1    ONLINE                     1 

The pooled connection overhead and bottlenecking that come with AUTO_CLOSE are fairly well known. Most of the time that is about all one needs to know to avoid AUTO_CLOSE. The experts simply tell us to turn AUTO_CLOSE off and leave it at that. In fact, the best practice policies included in the SQL Server installation will disable AUTO_CLOSE on all databases.
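
The policy check itself is trivial to approximate ad hoc; a one-off query like the sketch below lists any database where AUTO_CLOSE is still enabled:

SELECT name AS [auto_close_enabled_database]
FROM sys.databases
WHERE is_auto_close_on = 1;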

Enabling the best practice policies is far better than not using policies or following the painful trajectory of trial and error to “find” the correct best practices. In all cases beware the dogma. A well-considered policy built upon best practices, patience and persistence is preferred.

Applications that create and store sensitive data are at risk of compromise if adequate consideration is not given to vulnerabilities that exploit the available SQL Server resource metadata and/or SQL Server primary storage buffers. The query cache, for example, can be helpful in understanding the data store and the data flow. This is useful information for man-in-the-middle attacks, SQL injection attackers or insider hi-jinx. Likewise, the sys.dm_exec_requests DMV or sys.sysprocesses compatibility view will point the uninitiated and uninvited to every client serviced by a database host. From there a SQL injection attacker can map the application, identify weak hosts inside the DMZ and perhaps establish SQL injection based command line access targeting the weak internal node. The ways to be hacked are many.

The security implications of database resources are not normally considered in application design. If anything, database architectures err on the side of keeping resources loaded and exposed by making more memory available to the SQL Server. This increases the risk that cached data, data storage patterns, data flow patterns and cached query text can be mined for malicious purposes. To be sure, database resource exploits do not represent the low hanging fruit, but equally certainly most of the low hanging fruit has by now been plucked. Nonetheless, within the context of a well-considered defense-in-depth data security architecture, securing database resource access is essential. Presuming adequate system obfuscation of buffers in the free pool, releasing resources held in memory will provide a layer of protection against exploits of SQL Server memory space.

From another perspective: only if the storage location is secured and encrypted would it be wise to leverage AUTO_CLOSE as a security layer. Anyone with read access to the storage location can copy the database files while the database is cleanly shut down. An un-encrypted database file can also be attached and opened in EMERGENCY mode (read-only) on another SQL Server – illustrating the value of encrypted storage.
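
As a rough illustration of that exposure – the file path and database name here are hypothetical – a copied, un-encrypted .mdf can simply be attached on another instance and, failing normal recovery, pried open for read-only inspection in EMERGENCY mode:

-- attach a pilfered data file, rebuilding the missing log
CREATE DATABASE StolenCopy
  ON ( FILENAME = 'C:\loot\TestDb.mdf' )
  FOR ATTACH_REBUILD_LOG;

-- if the database will not come online, force read-only emergency access
ALTER DATABASE StolenCopy SET EMERGENCY;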

Applications with a relatively low rate of change and highly sensitive data – such as the Encryption Hierarchy Administration T-SQL utility that provided the example T-SQL for this series, or some witness protection relocation databases – are candidates for the anti-sniffing and anti-hijacking protections afforded by resource unloading. Furthermore, when TDE is also configured and database resources are unloaded, the most complete benefit of TDE can be achieved. Under such conditions there are no back-doors or alternative access paths that can circumvent the encryption.

I decided to put it to a quick test. The test output below shows the resource caching behavior around AUTO_CLOSE’s clean shutdown and OFFLINE under 3 configuration scenarios:

  1. AUTO_CLOSE ON
  2. AUTO_CLOSE OFF
  3. AUTO_CLOSE ON with Active Service Broker

Connecting to BWUNDER-PC\ELEVEN...
Set database context to 'master'.
Microsoft SQL Server 2012 - 11.0.2100.60 (X64) 
    Feb 10 2012 19:39:15 
    Copyright (c) Microsoft Corporation
    Enterprise Evaluation Edition (64-bit) on Windows NT 6.1 <X64> 
(Build 7601: Service Pack 1)

CREATE DATABASE TestDb
OPEN MASTER KEY
CREATE DATABASE ENCRYPTION KEY
Warning: The certificate ~snip~ has not been backed up ~snip~
Set database context to 'tempdb'.
CREATE FUNCTION tempdb.dbo.fnServerResources
CREATE PROCEDURE tempdb.dbo.CheckResourcesFromTempDB
Set database context to 'TestDb'.
OPEN MASTER KEY
CREATE PROCEDURE TestDb.dbo.CheckDbResources
 
#1  SET AUTO_CLOSE ON                 -- database resources ----------------
                                      sessions   objects   q-stats     locks
try invoke AUTO_CLOSE                        3         2         0         2
Changed database context to 'master'.
wait a second...                             0         2         1         0
cleanly shutdown                             0         0         0         0
Changed database context to 'TestDb'.
initiate OFFLINE                             1         2         0         2
Changed database context to 'master'.
SET OFFLINE                                  0         0         0         0
SET ONLINE                                   1         1         0         2
 
#2  SET AUTO_CLOSE OFF                -- database resources ----------------
                                      sessions   objects   q-stats     locks
try invoke AUTO_CLOSE                        1         2         0         2
Changed database context to 'master'.
wait a second...                             0         2         1         0
not shutdown                                 0         2         1         0
Changed database context to 'TestDb'.
initiate OFFLINE                             1         2         1         2
Changed database context to 'master'.
SET OFFLINE                                  0         0         0         0
SET ONLINE                                   1         1         0         2
 
 Configure Service Broker
CREATE PROCEDURE TestDb.dbo.TestQActivationProcedure
CREATE QUEUE WITH ACTIVATION ON
CREATE SERVICE for QUEUE
CREATE EVENT NOTIFICATION to SERVICE
1 events enqueued
 
#3  SET AUTO_CLOSE ON                 --- database resources ---------------
                                      sessions   objects   q-stats     locks
try invoke AUTO_CLOSE                        2         3         1         3
Changed database context to 'master'.
wait a second...                             1         3         2         1
not shutdown                                 1         3         2         1
Changed database context to 'TestDb'.
initiate OFFLINE                             2         3         2         3
Changed database context to 'master'.
SET OFFLINE                                  0         0         0         0
SET ONLINE                                   1         1         0         2
 
Disconnecting connection from BWUNDER-PC\ELEVEN...

Unfortunately AUTO_CLOSE does not pass this sniff test. It simply is not reliable under many common SQL configurations. It is not consistently good at returning resources to a busy desktop or at closing keys. A persistent connection, a daemon process, a scheduled job, replication, mirroring or Service Broker activation – among other things – can interfere with a clean shutdown, leaving the database memory work space and cache always available for malicious perusal. AUTO_CLOSE too easily becomes a phantom security layer: you might find some comfort that it is enabled, but you can never be certain that the protection is working.

The best way to be sure a database is shut down when idle is to take the database OFFLINE. That also requires a step to bring the database ONLINE before each use. Given that necessity, detaching the database would also work, with the added advantage that the database reference is removed from any server-scoped metadata in the catalog.
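
A sketch of the detach alternative, reusing the placeholder database name from above; re-attaching later would follow the CREATE DATABASE ... FOR ATTACH pattern shown in the earlier sketch:

-- drop the catalog reference but leave the files on disk (requires exclusive access)
EXEC sp_detach_db @dbname = N'$(ANY_USER_DATABASE)', @skipchecks = N'true';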

Posted in Encryption Hierarchies, Secure Data, Testing

Monitoring and Troubleshooting with sys.dm_os_ring_buffers

This post describes the care and feeding of the bastard circular queue in SQL Server exposed by the sys.dm_os_ring_buffers dynamic management view (DMV). The information in this ring buffer can give visibility into some important and always up-to-the-minute SQL Server health information:

  • security exceptions not returned to the client process for security reasons
  • exceptions raised at the sql_os level (way down there – think spid 1-50)
  • connections dropped by the server
  • help in identifying and understanding memory pressure
  • system resource utilization
  • CLR Integration scheduler State
  • Extended Events subsystem state

Ostensibly, the events selected for inclusion are what the Microsoft program groups and support engineers decided we don’t know we need to know. We have no control over which events find their way to this queue. Eventually, Microsoft did provide the ability to turn the entire feature on or off, two service packs after its introduction. Occasionally a new event type is added. To my knowledge there are 12 record types included for SQL Server 2012. VIEW SERVER STATE permission is required to see this data.
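
Granting that visibility to a non-sysadmin monitoring principal is a one-line statement; the login name below is a placeholder:

GRANT VIEW SERVER STATE TO [monitor_login];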

Known Record Types (queries for specific types appear below):

Ring Buffer Record Type             Added
RING_BUFFER_CLRAPPDOMAIN
RING_BUFFER_CONNECTIVITY            2008 RTM
RING_BUFFER_EXCEPTION
RING_BUFFER_HOBT_SCHEMAMGR          2012 RTM
RING_BUFFER_MEMORY_BROKER
RING_BUFFER_MEMORY_BROKER_CLERKS
RING_BUFFER_RESOURCE_MONITOR
RING_BUFFER_SCHEDULER
RING_BUFFER_SCHEDULER_MONITOR
RING_BUFFER_SECURITY_ERROR          2005 SP2
RING_BUFFER_XE_BUFFER_STATE
RING_BUFFER_XE_LOG

Other than a support disclaimer on the “SQL Server Operating System Related Dynamic Management Views (Transact-SQL)” jump page, sys.dm_os_ring_buffers is not documented in the Books Online. Several internal Microsoft bloggers have provided timely information around this DMV since it was introduced in SQL Server 2005. While by no means exhaustive, the following are good examples of such resources.

Lots of good resources.

Unfortunately I did not learn much about the RING_BUFFER_HOBT_SCHEMAMGR type in my recent search. It must be new in SQL Server 2012? All I see are operations where the action is ADD, with a distribution of events across all databases on the instance. There seem to be more entries in databases where I have been active. I am not sure what to make of RING_BUFFER_HOBT_SCHEMAMGR. The XML is not terribly revealing:

<Record id="11" type="RING_BUFFER_HOBT_SCHEMAMGR" time="1961318">
  <operation action="ADD" dbid="3" version="144202" />
  <hobt id="0001:0000003C" type="VISIBLE" />
</Record>

The thing about all ring buffer record types is that each one has at least one underlying XML schema. There are not many more schemas than types – most types seem to have only one associated XML schema. Just beware of the oddballs.

When interrogating the ring buffer, T-SQL in SSMS is my preferred interface. The SSMS query tool’s one-click access to the XML document from the query results grid output is helpful. It would be even better if Reporting Services were as friendly with XML typed data as the query tool’s results panel.

~~~~~~~~~~~~

The remainder of the post documents some sys.dm_os_ring_buffer queries that may be helpful to quickly determine what is in your ring buffer.

To see a summary of current records for the 12 types found in the queue by record type for SQL Server 2012 RTM:

SELECT known.[ring_buffer_type]
     , ISNULL( COUNT( buffer.[ring_buffer_type] ), 0 ) AS [type_count]
FROM ( SELECT 'RING_BUFFER_BUFFER_POOL' AS [ring_buffer_type]   -- do not see in 2012?
       UNION ALL
       SELECT 'RING_BUFFER_CLRAPPDOMAIN'
       UNION ALL
       SELECT 'RING_BUFFER_CONNECTIVITY'             -- SQL 2008
       UNION ALL
       SELECT 'RING_BUFFER_EXCEPTION'
       UNION ALL
       SELECT 'RING_BUFFER_HOBT_SCHEMAMGR'           -- SQL 2012
       UNION ALL
       SELECT 'RING_BUFFER_MEMORY_BROKER'
       UNION ALL
       SELECT 'RING_BUFFER_MEMORY_BROKER_CLERKS'
       UNION ALL
       SELECT 'RING_BUFFER_OOM'                      -- do not see in 2012?
       UNION ALL
       SELECT 'RING_BUFFER_RESOURCE_MONITOR'
       UNION ALL
       SELECT 'RING_BUFFER_SCHEDULER'
       UNION ALL
       SELECT 'RING_BUFFER_SCHEDULER_MONITOR'
       UNION ALL
       SELECT 'RING_BUFFER_SECURITY_ERROR'           -- SQL 2005 SP2
       UNION ALL
       SELECT 'RING_BUFFER_SINGLE_PAGE_ALLOCATOR'    -- do not see in 2012?
       UNION ALL
       SELECT 'RING_BUFFER_XE_BUFFER_STATE'
       UNION ALL
       SELECT 'RING_BUFFER_XE_LOG' ) AS known
LEFT JOIN sys.dm_os_ring_buffers buffer
ON buffer.[ring_buffer_type] = known.[ring_buffer_type]
CROSS JOIN sys.dm_os_sys_info info
GROUP BY known.[ring_buffer_type];
 
 
Shredding the XML to render XML elements as columnar data attributes makes it a little easier to understand what is swinging through the ring at the present moment. Note that the SELECT statement in the common table expression can be executed on its own to retrieve the detail rows used to build the aggregate.
  
WITH BufferTypeCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( '@time', 'BIGINT' )
                                     , Buffer.Record.value( '@id', 'INT' )
                                     , Data.[Address]
                                     , Data.[Type] ) AS [RowNumber]
          , Data.[Address]
          , Data.[Type]
          , CASE
              WHEN Data.[Type] = 'RING_BUFFER_CLRAPPDOMAIN'
                THEN Buffer.Record.value('Action[1]', 'NVARCHAR(128)')
              WHEN Data.[Type] = 'RING_BUFFER_CONNECTIVITY'
                THEN Buffer.Record.value( '(ConnectivityTraceRecord/RecordType)[1]'
                                        , 'NVARCHAR(128)')
              WHEN Data.[Type] = 'RING_BUFFER_EXCEPTION'
                THEN Buffer.Record.value( '(Exception/Error)[1]', 'NVARCHAR(128)')
              WHEN Data.[Type] = 'RING_BUFFER_HOBT_SCHEMAMGR'
                THEN DB_NAME( Buffer.Record.value( 'operation[1]/@dbid', 'INT') )
              WHEN Data.[Type] = 'RING_BUFFER_MEMORY_BROKER'
                THEN Buffer.Record.value( '(MemoryBroker/Broker)[1]'
                                        , 'NVARCHAR(128)' )
              WHEN Data.[Type] = 'RING_BUFFER_MEMORY_BROKER_CLERKS'
                THEN Buffer.Record.value( '(MemoryBrokerClerk/Name)[1]'
                                        , 'NVARCHAR(128)' )
              WHEN Data.[Type] = 'RING_BUFFER_RESOURCE_MONITOR'
                THEN Buffer.Record.value( '(ResourceMonitor/Notification)[1]'
                                        , 'NVARCHAR(128)' )
              WHEN Data.[Type] = 'RING_BUFFER_SCHEDULER'
                THEN Buffer.Record.value( '(Scheduler/Action)[1]'
                                        , 'NVARCHAR(128)')
              WHEN Data.[Type] = 'RING_BUFFER_SCHEDULER_MONITOR'
                THEN 'heartbeat'
              WHEN Data.[Type] = 'RING_BUFFER_SECURITY_ERROR'
                THEN FORMATMESSAGE('ErrorCode: [%s], SQLErrorCode: [%s]'
                                    , Buffer.Record.value( '(Error/ErrorCode)[1]'
                                                         , 'NVARCHAR(128)' )
                                    , Buffer.Record.value( '(Error/SQLErrorCode)[1]'
                                                         , 'NVARCHAR(128)' ) )
              WHEN Data.[Type] = 'RING_BUFFER_XE_BUFFER_STATE'
                THEN Buffer.Record.value( '(XE_BufferStateRecord/NewState)[1]'
                                        , 'NVARCHAR(128)' )
              WHEN Data.[Type] = 'RING_BUFFER_XE_LOG'
                THEN Buffer.Record.value( '(XE_LogRecord/@message)[1]'
                                        , 'NVARCHAR(128)' )
              ELSE Data.[Type]  -- the next new one
              END AS [Element]
          , DATEADD( SECOND
                    ,( Buffer.Record.value( '@time'
                                           , 'BIGINT' ) - Data.ms_ticks ) / 1000 
                    , GETDATE() ) AS [Time]    
          , Buffer.Record.value('@id', 'INT') AS [Id]
          , Data.EventXML
    FROM (SELECT CAST(b.Record AS XML) AS EventXML
                , b.ring_buffer_address AS [Address]
                , b.ring_buffer_type AS [Type]
                , i.ms_ticks
          FROM sys.dm_os_ring_buffers b
          CROSS JOIN sys.dm_os_sys_info i ) AS Data
    CROSS APPLY Data.EventXML.nodes('//Record') AS Buffer(Record)
    --ORDER BY RowNumber
   )  
SELECT first.[Type]
     , summary.[Element]
     , summary.[count]
     , first.[Time] AS [FirstTime]
     , last.[Time] AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM (SELECT [Type]
           , [Element]
           , COUNT( RowNumber ) AS [count]
           , MIN( RowNumber ) AS [FirstRow]
           , MAX( RowNumber ) AS [LastRow]
      FROM BufferTypeCte
      GROUP BY [Type], [Element] ) AS summary
JOIN BufferTypeCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN BufferTypeCte AS last
ON last.RowNumber = summary.[LastRow]
--ORDER BY [FirstTime];
--ORDER BY [LastTime];
ORDER BY [Type], [Element];

~~~~~~~~~~~~~~

And finally, some queries for deeper shredding of specific record types & XML schemas. I’ll spare us the output screen shots. Just try ’em out, using the summary query above to see which types are in your queue. If nothing else you get the XML to help you write the query you want to see.

If I know of any resources for a record type – other than the ones already listed – I will include them near the query for that record type.

RING_BUFFER_CLRAPPDOMAIN

(“It’s 3 AM. Do you know where your appdomain is?”)

WITH CLRAppDomainCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER
             ( ORDER BY Buffer.Record.value( '@time', 'BIGINT' )
                      , Buffer.Record.value( '@id'
                                           , 'INT' ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value( 'Action[1]', 'NVARCHAR(128)') AS [Action]
         , Buffer.Record.value( 'AppDomain[1]/@dbid'
                              , 'INT') AS [AppDomainDbId]
         , Buffer.Record.value( 'AppDomain[1]/@ownerid'
                              , 'INT') AS [AppDomainOwnerId]
         , Buffer.Record.value( 'AppDomain[1]/@type'
                              , 'NVARCHAR(128)') AS [AppDomainType]
         , Buffer.Record.value( '(AppDomain/State)[1]'
                              , 'NVARCHAR(128)') AS [AppDomainState]
         , Buffer.Record.value( '@time', 'BIGINT' ) AS [time]
         , Buffer.Record.value('@id', 'INT') AS [Id]
         , Data.EventXML
    FROM (SELECT CAST(b.Record AS XML) AS EventXML
               , b.ring_buffer_type
          FROM sys.dm_os_ring_buffers AS b
          WHERE ring_buffer_type = 'RING_BUFFER_CLRAPPDOMAIN') AS Data
    CROSS APPLY Data.EventXML.nodes('//Record') AS Buffer(Record)
   )  
SELECT first.[Type]
     , summary.[Action]
     , summary.[count]
     , DATEADD( SECOND
              ,( first.[time] - info.ms_ticks ) / 1000
              , CURRENT_TIMESTAMP ) AS [FirstTime]
     , DATEADD( SECOND
               ,( last.[time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM (SELECT [Action]
           , COUNT( RowNumber ) AS [count]
           , MIN( RowNumber ) AS [FirstRow]
           , MAX( RowNumber ) AS [LastRow]
      FROM CLRAppDomainCte
      GROUP BY [Action] ) AS summary
JOIN CLRAppDomainCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN CLRAppDomainCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
ORDER BY [Action]; 

RING_BUFFER_CONNECTIVITY

The SQL Server Protocols team post mentioned above talks about identifying domain authentication problems with these records.

First available in SQL Server 2008 RTM.

WITH ConnectivityCte
AS (
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( '@time', 'BIGINT' )
                                     , Buffer.Record.value( '@id', 'INT' ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value( '(ConnectivityTraceRecord/RecordSource)[1]'
                              , 'NVARCHAR(128)') as [RecordSource]
         , Buffer.Record.value( '(ConnectivityTraceRecord/RecordType)[1]'
                              , 'NVARCHAR(128)') as [RecordType]
         , Buffer.Record.value( '@time', 'BIGINT' ) AS [time]
         , Buffer.Record.value( '@id', 'INT') AS [Id]
         , Data.EventXML
    FROM ( SELECT CAST(Record AS XML) AS EventXML
                , ring_buffer_type 
           FROM sys.dm_os_ring_buffers
           WHERE ring_buffer_type = 'RING_BUFFER_CONNECTIVITY' ) AS Data
    CROSS APPLY Data.EventXML.nodes('//Record') AS Buffer(Record)
   )
SELECT first.[Type]
     , summary.[RecordSource]
     , summary.[RecordType]
     , summary.[count]
     , DATEADD( second
               , ( first.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [FirstTime]
     , DATEADD( second
               , ( last.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM (SELECT [RecordSource]
           , [RecordType]
           , COUNT(*) AS [count]
           , MIN(RowNumber) AS [FirstRow]
           , MAX(RowNumber) AS [LastRow]
      FROM ConnectivityCte
      GROUP BY [RecordSource], [RecordType] ) AS summary
JOIN ConnectivityCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN ConnectivityCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
ORDER BY summary.[FirstRow]; 

RING_BUFFER_EXCEPTION

There should be no errors of unknown origin and a plan to eliminate the rest. It pays to dig deep to understand each and every exception shown.

WITH ExceptionCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( '@time', 'BIGINT' )
                                     , Buffer.Record.value( '@id', 'INT' ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value('(Exception/Error)[1]', 'int') AS [Error]
         , Buffer.Record.value( '@time', 'BIGINT' ) AS [time]
         , Buffer.Record.value('@id', 'int') AS [Id]
         , Data.EventXML
    FROM (SELECT CAST(Record AS XML) AS EventXML
               , ring_buffer_type 
          FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = 'RING_BUFFER_EXCEPTION') AS Data
    CROSS APPLY Data.EventXML.nodes('//Record') AS Buffer(Record)
   )
SELECT first.[Type]
     , summary.[Error]
     , CASE WHEN msg.message_id IS NOT NULL 
            THEN msg.text
            ELSE 'no sys.messages row for Error'
            END AS [External Message Info]
     , summary.[count]
     , DATEADD( second
               , ( first.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [FirstTime]
     , DATEADD( second
               , ( last.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM ( SELECT [Error]
            , COUNT(*) AS [count]
            , MIN(RowNumber) AS [FirstRow]
            , MAX(RowNumber) AS [LastRow]
       FROM ExceptionCte
       GROUP BY [Error] ) AS summary
JOIN ExceptionCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN ExceptionCte AS last
ON last.RowNumber = summary.[LastRow]
LEFT JOIN sys.messages msg
ON summary.Error = msg.message_id
AND msg.language_id = 1033
CROSS JOIN sys.dm_os_sys_info AS info
ORDER BY [Error];

RING_BUFFER_HOBT_SCHEMAMGR

WITH HOBTSchemaMgrCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( '@time', 'BIGINT' )
                                     , Buffer.Record.value( '@id', 'INT' ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value( 'operation[1]/@action' , 'NVARCHAR(128)') AS [Action]
         , Buffer.Record.value( 'operation[1]/@dbid' , 'INT') AS [DbId]
         , Buffer.Record.value( 'operation[1]/@version', 'INT' ) AS [Version]
         , Buffer.Record.value( '@time', 'BIGINT' ) AS [time]
         , Buffer.Record.value( '@id', 'BIGINT' ) AS [Id]
         , Data.EventXML
    FROM (SELECT CAST(Record AS XML) AS EventXML
               , ring_buffer_type
          FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = 'RING_BUFFER_HOBT_SCHEMAMGR') AS Data
    CROSS APPLY Data.EventXML.nodes('//Record') AS Buffer(Record)
   )
SELECT first.[Type]
     , summary.[DbId]
     , DB_NAME(summary.[DbId]) AS DbName
     , summary.[count]
     , DATEADD( second
               , ( first.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [FirstTime]
     , DATEADD( second
               , ( last.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM (SELECT [DbId]
           , COUNT(*) AS [count]
           , MIN([RowNumber]) AS [FirstRow]
           , MAX([RowNumber]) AS [LastRow]
       FROM HOBTSchemaMgrCte
       GROUP BY [DbId] ) AS summary
JOIN HOBTSchemaMgrCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN HOBTSchemaMgrCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
ORDER BY DbId;

RING_BUFFER_MEMORY_BROKER

“Actions the internal memory broker is taking to balance the memory between caches.” – Bob Dorr
http://blogs.msdn.com/b/slavao/archive/2005/11/05/489459.aspx

WITH MemoryBrokerCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( ‘@time’, ‘BIGINT’ )
                                     , Buffer.Record.value( ‘@id’, ‘INT’ ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value(‘(MemoryBroker/Pool)[1]’, ‘INT’) AS [Pool]
         , Buffer.Record.value(‘(MemoryBroker/Broker)[1]’, ‘NVARCHAR(128)’) AS [Broker]
         , Buffer.Record.value(‘(MemoryBroker/Notification)[1]’, ‘NVARCHAR(128)’) AS [Notification]
         , Buffer.Record.value( ‘@time’, ‘BIGINT’ ) AS [time]
         , Buffer.Record.value(‘@id’, ‘int’) AS [Id]
         , Data.EventXML
    FROM (SELECT CAST(Record AS XML) AS EventXML
                      , ring_buffer_type
          FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = ‘RING_BUFFER_MEMORY_BROKER’) AS Data
    CROSS APPLY EventXML.nodes(‘//Record’) AS Buffer(Record)
   )
SELECT first.[Type]
     , summary.[Pool]
     , summary.[Broker]
     , summary.[Notification]
     , summary.[count]
     , DATEADD( second
               , ( first.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [FirstTime]
     , DATEADD( second
               , ( last.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM (SELECT [Pool]
           , [Broker]
           , [Notification]
           , COUNT(*) AS [count]
           , MIN(RowNumber) AS [FirstRow]
           , MAX(RowNumber) AS [LastRow]
      FROM MemoryBrokerCte
      GROUP BY [Type]
             , [Pool]
             , [Broker]
             , [Notification] ) AS summary
JOIN MemoryBrokerCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN MemoryBrokerCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
ORDER BY [Type]
       , [Pool]
       , [Broker]
       , [Notification];
  

RING_BUFFER_MEMORY_BROKER_CLERKS

Slava Oks’ posts are the best place to read about memory broker clerks

(see sys.dm_os_memory_clerks in BOL for more about memory clerks.)

WITH MemoryBrokerClerksCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( ‘@time’, ‘BIGINT’ )
                                     , Buffer.Record.value( ‘@id’, ‘INT’ ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value( ‘(MemoryBrokerClerk/Name)[1]’, ‘NVARCHAR(128)’ ) AS Name
         , Buffer.Record.value( ‘(MemoryBrokerClerk/TotalPages)[1]’, ‘INT’ ) AS TotalPages
         , Buffer.Record.value( ‘@time’, ‘BIGINT’ ) AS [time]
         , Buffer.Record.value( ‘@id’, ‘INT’ ) AS [Id]
         , Data.EventXML
    FROM (SELECT CAST(Record AS XML) AS EventXML
               , ring_buffer_type
          FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = ‘RING_BUFFER_MEMORY_BROKER_CLERKS’ ) AS Data
    CROSS APPLY EventXML.nodes(‘//Record’) AS Buffer(Record)
   )
SELECT first.[Type]
     , summary.[Name]
     , summary.[TotalPages]
     , summary.[count]
     , DATEADD( second
               , ( first.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [FirstTime]
     , DATEADD( second
               , ( last.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM (SELECT [Name]
           , [TotalPages]
           , COUNT(*) AS [count]
           , MIN(RowNumber) AS [FirstRow]
           , MAX(RowNumber) AS [LastRow]
      FROM MemoryBrokerClerksCte
      GROUP BY [Type]
             , [Name]
             , [TotalPages] ) AS summary
JOIN MemoryBrokerClerksCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN MemoryBrokerClerksCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
ORDER BY [Type], [Name]; 

RING_BUFFER_RESOURCE_MONITOR

(The CSS link above also speaks to the Resource Monitor ring buffer records.)

“allows you to see memory state changes due to various types of memory pressure.” – Christian Lefter

“Actions the internal memory broker is taking to balance the memory between caches.” – Bob Dorr

WITH ResourceMonitorCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( ‘@time’, ‘BIGINT’ )
                                     , Buffer.Record.value( ‘@id’, ‘INT’ ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value( ‘(ResourceMonitor/Notification)[1]’, ‘NVARCHAR(128)’ ) AS [ResourceNotification]
         , Buffer.Record.value( ‘@time’, ‘BIGINT’ ) AS [time]
         , Buffer.Record.value( ‘@id’, ‘INT’ ) AS [Id]
         , Data.EventXML
    FROM (SELECT CAST(Record AS XML) AS EventXML
               , ring_buffer_type
          FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = ‘RING_BUFFER_RESOURCE_MONITOR’) AS Data
    CROSS APPLY EventXML.nodes(‘//Record’) AS Buffer(Record)
   )
SELECT first.[Type]
     , summary.[ResourceNotification]
            , summary.[count]
     , DATEADD( second
               , ( first.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [FirstTime]
     , DATEADD( second
               , ( last.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM (SELECT [ResourceNotification]
           , COUNT(*) AS [count]
           , MIN(RowNumber) AS [FirstRow]
           , MAX(RowNumber) AS [LastRow]
      FROM ResourceMonitorCte
      GROUP BY [ResourceNotification] ) AS summary
JOIN ResourceMonitorCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN ResourceMonitorCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
ORDER BY [ResourceNotification]; 

RING_BUFFER_SCHEDULER

(I don’t see many examples of using this data. Instead, more people seem to find meaning in the RING_BUFFER_SCHEDULER_MONITOR records.)

“Actual scheduler activity such as context switching. You can reconstruct the execution order from these entries.” – Bob Dorr

WITH SchedulerCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( ‘@time’, ‘BIGINT’ )
                                     , Buffer.Record.value( ‘@id’, ‘INT’ ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value(‘(Scheduler/Action)[1]’, ‘NVARCHAR(128)’) AS [Action]
         , Buffer.Record.value( ‘@time’, ‘BIGINT’ ) AS [time]
         , Buffer.Record.value(‘@id’, ‘BIGINT’) AS [Id]
         , Data.EventXML
    FROM (SELECT CAST(Record AS XML) AS EventXML
               , ring_buffer_type 
          FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = ‘RING_BUFFER_SCHEDULER’) AS Data
    CROSS APPLY EventXML.nodes(‘//Record’) AS Buffer(Record)
   )
SELECT first.[Type]
     , summary.[Action]
     , summary.[Count]
     , DATEADD( second
               , ( first.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [FirstTime]
     , DATEADD( second
               , ( last.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM ( SELECT [Action]
                   , COUNT(*) AS [Count]
            , MIN(RowNumber) AS [FirstRow]
            , MAX(RowNumber) AS [LastRow]
       FROM SchedulerCte
       GROUP BY [Action] ) AS summary
JOIN SchedulerCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN SchedulerCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
ORDER BY FirstTime;
–ORDER BY LastTime DESC; 

RING_BUFFER_SCHEDULER_MONITOR

“[Tells us] what is the state of the logical schedulers; the health record type is very helpful.” – Bob Dorr

WITH SchedulerMonitorCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( ‘@time’, ‘BIGINT’ )
                                     , Buffer.Record.value( ‘@id’, ‘INT’ ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value(‘(SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]’, ‘bigint’) AS [ProcessUtilization]
         , Buffer.Record.value(‘(SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]’, ‘bigint’) AS [SystemIdle]
         , Buffer.Record.value(‘(SchedulerMonitorEvent/SystemHealth/UserModeTime)[1]’, ‘bigint’) AS [UserModeTime]
         , Buffer.Record.value(‘(SchedulerMonitorEvent/SystemHealth/KernelModeTime)[1]’, ‘bigint’) AS [KernelModeTime]
         , Buffer.Record.value(‘(SchedulerMonitorEvent/SystemHealth/PageFaults)[1]’, ‘bigint’) AS [PageFaults]
         , Buffer.Record.value(‘(SchedulerMonitorEvent/SystemHealth/WorkingSetDelta)[1]’, ‘bigint’) AS [WorkingSetDelta]
         , Buffer.Record.value(‘(SchedulerMonitorEvent/SystemHealth/MemoryUtilization)[1]’, ‘bigint’) AS [MemoryUtilization]
                            , Buffer.Record.value( ‘@time’, ‘BIGINT’ ) AS [time] 
         , Buffer.Record.value(‘@id’, ‘int’) AS [Id]
         , Data.EventXML
    FROM ( SELECT CAST(Record AS XML) AS EventXML
                , ring_buffer_type
           FROM sys.dm_os_ring_buffers
           WHERE ring_buffer_type = ‘RING_BUFFER_SCHEDULER_MONITOR’) AS Data
    CROSS APPLY EventXML.nodes(‘//Record’) AS Buffer(Record)
   )
SELECT first.[Type]
     , summary.[Count]
     , DATEADD( second
               , ( first.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [FirstTime]
     , DATEADD( second
               , ( last.[Time] - info.ms_ticks ) / 1000
               , CURRENT_TIMESTAMP ) AS [LastTime]
     , first.EventXML AS [FirstRecord]
     , last.EventXML AS [LastRecord]
FROM ( SELECT COUNT(RowNumber) AS [count]
            , MIN(RowNumber) AS [FirstRow]
            , MAX(RowNumber) AS [LastRow]
       FROM SchedulerMonitorCte ) AS summary
JOIN SchedulerMonitorCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN SchedulerMonitorCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info;

RING_BUFFER_SECURITY_ERROR

http://blogs.msdn.com/b/psssql/archive/2008/03/24/how-it-works-sql-server-2005-sp2-security-ring-buffer-ring-buffer-security-error.aspx

WITH SecurityErrorCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( ‘@time’, ‘BIGINT’ )
                                     , Buffer.Record.value( ‘@id’, ‘INT’ ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value( ‘(Error/SPID)[1]’, ‘INT’ ) AS [SessionId]
         , Buffer.Record.value( ‘(Error/ErrorCode)[1]’, ‘NVARCHAR(128)’ ) AS [ErrorCode]
         , Buffer.Record.value( ‘(Error/SQLErrorCode)[1]’, ‘NVARCHAR(128)’ ) AS [SQLErrorCode]
         , Buffer.Record.value( ‘(Error/APIName)[1]’, ‘NVARCHAR(128)’ ) AS [APIName]
         , Buffer.Record.value( ‘(Error/CallingAPIName)[1]’, ‘NVARCHAR(128)’ ) AS [CallingAPIName]
                            , Buffer.Record.value( ‘@time’, ‘BIGINT’ ) AS [time] 
         , Buffer.Record.value(‘@id’, ‘INT’) AS [Id]
         , Data.EventXML   
                     FROM (SELECT CAST(Record AS XML) AS EventXML
                                   , ring_buffer_type
                                  FROM sys.dm_os_ring_buffers
                                  WHERE ring_buffer_type = ‘RING_BUFFER_SECURITY_ERROR’) AS Data
                     CROSS APPLY EventXML.nodes(‘//Record’) AS Buffer(Record)
    )
  SELECT first.[Type]
       , summary.[ErrorCode]
       , summary.[SQLErrorCode]
       , summary.[APIName]
       , summary.[CallingAPIName]
       , summary.[Count]
       , DATEADD( second
                 , ( first.[Time] - info.ms_ticks ) / 1000
                 , CURRENT_TIMESTAMP ) AS [FirstTime]
       , DATEADD( second
                 , ( last.[Time] - info.ms_ticks ) / 1000
                 , CURRENT_TIMESTAMP ) AS [LastTime]
       , first.EventXML AS [FirstRecord]
       , last.EventXML AS [LastRecord]
  FROM ( SELECT [ErrorCode]
              , [SQLErrorCode]
              , [APIName]
              , [CallingAPIName]
              , COUNT(*) AS [count]
              , MIN(RowNumber) AS [FirstRow]
              , MAX(RowNumber) AS [LastRow]
         FROM SecurityErrorCte
         GROUP BY [ErrorCode]
                , [SQLErrorCode]
                , [APIName]
                , [CallingAPIName] ) AS summary
JOIN SecurityErrorCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN SecurityErrorCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
ORDER BY [FirstTime];
–ORDER BY [LastTime] DESC;

RING_BUFFER_XE_BUFFER_STATE

As Extended Events take over for SQL Trace, XE buffer management will become more necessary and this type could become more interesting.

WITH XEBufferStateCte
AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( ‘@time’, ‘BIGINT’ )
                                     , Buffer.Record.value( ‘@id’, ‘INT’ ) ) AS [RowNumber]
         , Data.ring_buffer_type AS [Type]
         , Buffer.Record.value(‘(XE_BufferStateRecord/NewState)[1]’, ‘varchar(100)’) as [NewState]
         , Buffer.Record.value( ‘@time’, ‘BIGINT’ ) AS [time]
         , Buffer.Record.value(‘@id’, ‘int’) AS [Id]
         , Data.EventXML
    FROM (SELECT CAST(Record AS XML) AS EventXML
               , ring_buffer_type 
          FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = 'RING_BUFFER_XE_BUFFER_STATE') AS Data
    CROSS APPLY EventXML.nodes(‘//Record’) AS Buffer(Record)
   )
SELECT first.[Type]
      , summary.[NewState]
      , summary.[Count]
      , DATEADD( second
                , ( first.[Time] - info.ms_ticks ) / 1000
                , CURRENT_TIMESTAMP ) AS [FirstTime]
      , DATEADD( second
                , ( last.[Time] - info.ms_ticks ) / 1000
                , CURRENT_TIMESTAMP ) AS [LastTime]
      , first.EventXML AS [FirstRecord]
      , last.EventXML AS [LastRecord]
FROM (SELECT [NewState]
           , COUNT(*) AS [count]
           , MIN(RowNumber) AS [FirstRow]
           , MAX(RowNumber) AS [LastRow]
      FROM XEBufferStateCte
      GROUP BY [NewState] ) AS summary
JOIN XEBufferStateCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN XEBufferStateCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
–ORDER BY [FirstTime];
ORDER BY [LastTime] DESC;

RING_BUFFER_XE_LOG

WITH XELogCte

AS (
           -- select & run this query for a list of records in the queue
    SELECT ROW_NUMBER() OVER (ORDER BY Buffer.Record.value( ‘@time’, ‘BIGINT’ )
                                    , Buffer.Record.value( ‘@id’, ‘INT’ ) ) AS [RowNumber]
        , Data.ring_buffer_type AS [Type]
        , Buffer.Record.value(‘(XE_LogRecord/@message)[1]’, ‘varchar(max)’) AS [XE_LogRecord]
        , Buffer.Record.value( ‘@time’, ‘BIGINT’ ) AS [time] 
        , Buffer.Record.value(‘@id’, ‘int’) AS [Id]
        , Data.EventXML
    FROM (SELECT CAST(Record AS XML) AS EventXML
                , ring_buffer_type 
          FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = ‘RING_BUFFER_XE_LOG’) AS Data
    CROSS APPLY EventXML.nodes(‘//Record’) AS Buffer(Record)
    )
SELECT first.[Type]
      , summary.[XE_LogRecord]
      , summary.[Count]
      , DATEADD( second
                , ( first.[Time] - info.ms_ticks ) / 1000
                , CURRENT_TIMESTAMP ) AS [FirstTime]
      , DATEADD( second
                , ( last.[Time] - info.ms_ticks ) / 1000
                , CURRENT_TIMESTAMP ) AS [LastTime]
      , first.EventXML AS [FirstRecord]
      , last.EventXML AS [LastRecord]
FROM ( SELECT [XE_LogRecord]
            , COUNT(*) AS [count]
            , MIN(RowNumber) AS [FirstRow]
            , MAX(RowNumber) AS [LastRow]
       FROM XELogCte
       GROUP BY [XE_LogRecord]) AS summary
JOIN XELogCte AS first
ON first.RowNumber = summary.[FirstRow]
JOIN XELogCte AS last
ON last.RowNumber = summary.[LastRow]
CROSS JOIN sys.dm_os_sys_info AS info
–ORDER BY [FirstTime];
ORDER BY [LastTime] DESC;

Posted in Monitoring | 5 Comments

T-SQL Cryptographic Patterns – part 8: leaving the key under the doormat

As if preventing sensitive data leaks through the obfuscation capabilities of SQL Server’s event subsystems weren’t enough of a challenge (see part 7), it is the risk associated with storing encryption keys alongside the data that has been the strongest deterrent to using T-SQL cryptography when there is another choice. It is getting better, but then so are the attackers…

Anyone who has experimented with DBCC PAGE() or DBCC SQLPERF() or DBCC MEMUSAGE() until that became boring – or anyone unfortunate enough to need to recover lost bits from a corrupt database – knows that every bit in SQL Server’s memory space is readily available given an adequate level of access.
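
For instance, with an adequate level of access it takes only two well-known but undocumented DBCC commands to dump any page as readable bytes; the database name, file id and page id below are placeholders:

DBCC TRACEON (3604);                    -- route DBCC output to the client
DBCC PAGE ('SomeUserDb', 1, 153, 3);    -- database, file id, page id, print option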

There are many debuggers, tracing engines and logging resources that expose bits in memory, in cache or on the wire. Some, like the SSMS or Visual Studio (remote) debuggers specifically target SQL Server. Others are less restrictive (more powerful?).

Consider:

  • the Visual Studio WCF Listener and ATL trace tools
  • ODBC trace
  • Event Tracing for Windows
  • Windbg.exe
  • .NET System.Diagnostics namespace
  • Java Debug Interface (JDI) trace
  • TDS analyzers (Ribo, WireShark)
  • Countless network sniffers
  • Add your favorite dirty dozen

These are useful tools. The problem is they are as useful to the malicious as they are to the constructive.

There are also indirect helpers available to the observant. From Books Online, for example, one can learn that a database Master Key lives in the database memory space and is not encrypted. That could be useful information to anyone bent on pirating the encryption hierarchy of a SQL Server. I have no doubt that a hacking expedition could sooner or later find the right buffers with the right tool, perhaps goaded on by the factual nature of the target. Even SQL injection – depending upon the authority of the hijacked SPID – has some opportunity to mine memory objects. The ready availability of so many resources leaves no question in my mind that mastery of the memory space is attainable for the determined.

Other opened keys and certificates, in addition to the Master Key, may be available in memory as well. I cannot say how difficult it might be to recover any of those keys. I can see that access could depend upon several factors, most external to SQL Server. Nonetheless, I have no doubt that any key can be recovered from memory. Once someone with the ability to locate a key in memory is in possession of the key’s bits, it could be only a matter of injecting those bits into the registers used by a key or certificate of the same name on a SQL Server back in the hacker’s lab to be able to decode stolen cipher text.

The best prevention is to securely isolate the keys from the data. For whatever reasons, the designers and developers of SQL Server decided to disregard this principle of cryptography. The genuine need lost out to least effort yet again, perhaps? At any rate, the SQL Server encryption implementation is hindered by compromises that are clearly strategic or political – as opposed to technical – in nature. EKM looks to be not too much more than wallpaper to hide the inherent weakness of the implementation. Instead of pursuing a solution that separates the keys and the data, the canned workaround story-line reads something like this to me, “So! You don’t like our cryptography? OK fine! Take this old OLEDB hook that we will never use – and may even deprecate soon – and add your own damn cryptography – and oh, by the way you are on your own if you use the hook.” Frankly I don’t get it. SQL Server is made up of numerous services that can be deployed and secured: DB engine, Analysis Services, Reporting Services, SSIS, etc. What is the big deal about adding an encryption service that does about what you get now, only without the disregard for security best practice?

I’m thinking: if I have to write my own cryptography or plop down lots-o-dollars for an encryption appliance or HSM, how much harder or more expensive is it to just write my own encrypted data store? One that I know exploits Intel’s AES-NI perhaps and does not silently feed events to ETW. One with a “translucent” workspace that no one can see as clear text in toto. One that is far less likely than a Microsoft product to attract an army of hackers armed with a bevy of free hacking tools.

Regardless, it is within the realm of possibility to achieve a measure of secure and layered isolation of data and keys stored within SQL Server. One layer is the audit capabilities of Windows and SQL Server: the absence of audited change increases confidence that no compromise has occurred. Another layer must assure that keys from one SQL Server are used safely and securely when encrypting data from another SQL Server. Two SQL Servers are required to pull it off: one to store the keys and another to store the data. The good news is only one of them needs to do encryption, so even SQL Azure is a suitable candidate for the one that stores the cipher text. The keys and certificates must be on a real or virtual SQL Server; SQL Azure won’t work for that role. This is admittedly an effort of moderate complexity. To convey that complexity I wish to induce a mental shift in the reader, away from the SQL Server internals perspective expanded upon in the previous post and toward an application architectural perspective.

To facilitate the shift, kindly zoom your attention out to this architectural view of the BackupServiceMasterKey stored procedure’s application. The procedure and the business context have been previously introduced in this series (see Dog Food).  BackupServiceMasterKey is a method from an encrypted secure catalog schema for end-to-end SQL Server Encryption Hierarchy Management. The schema is found in the database labeled spoke 1 in the diagram below. spoke 1 could be on any on-premise, virtual or private-cloud SQL Server 2012 instance. The databases labeled spoke 2, 3 and n also hold a copy of the schema: one per SQL instance.

The backup files are sent to the hub and stored at the hub as VARBINARY typed data. There is no file system access to the backup at the hub.

Metadata changes land on hub A as an already encrypted data store. There are no keys or certificates at the hub. The hub is primarily a passive offsite data store. The hub serves as best practice secure offsite storage for key backups.

Locate the start here marker in the spoke 1 diagram to follow the execution of BackupServiceMasterKey. The Service Master Key export is output to a local backup file and then relayed to a remote location at hub A. Metadata associated with the key backup file (i.e., SQL Server object name, key identifier, file paths, file name, passphrases, etc.) is encrypted at the spoke and also sent to the offsite location. A spoke can see the rows from all other spokes, but a spoke may or may not be able to decrypt data encrypted on another spoke (and so may see only cipher text).

Following the key’s traverse you may have noticed:

  • Spoke Administrator action is required to initiate a backup, pull a file from offsite or restore a key from backup.
  • The Service Master Key, db Master Keys and Certificates from all databases on the spoke are eligible for backup/restore
  • Keys are encrypted by a phrase when exported to backup files in a folder with write access restricted to the SQL Server service account
  • Phrases and other sensitive information are encrypted into a secure Encryption Hierarchy Administration catalog schema.
  • Upon backup a message is sent to the local service broker initiating an asynchronous, guaranteed offsite copy of the backup to the hub.
  • The Spoke Broker uses OPENROWSET(BULK '', SINGLE_BLOB) to insert the bits of the backup file into a VARBINARY column at the hub (see the sketch after this list)
  • Change Data Capture changes of the catalog schema are also delivered offsite using the local service broker.
  • The hub has no query access to spoke data. All data is pushed/pulled by the spoke.
  • The hub is a passive data store.  The store can be presented as a distributed partitioned view if more than one hub is present.
  • Spoke Admins do not require access to the hub in the workflow but are granted global select permissions for research and support.
  • A SQL Server virtual FILETABLE folder is mounted under the local backup location at each spoke.
  • Spoke Admin Role Members can recall backups from the hub’s Backups table to the spoke’s FILETABLE.
  • The FILETABLE’s UNC address can be used as the source of a key or certificate restore.
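
As a rough sketch of the relay step called out in the OPENROWSET bullet above (the folder and file name are hypothetical placeholders), the spoke might read the backup file bits along these lines before shipping them to the hub:

DECLARE @BackupBits VARBINARY(MAX);

SELECT @BackupBits = src.BulkColumn
FROM OPENROWSET( BULK N'S:\KeyBackups\MYSERVER__ServiceMasterKey__20121031120000000.smk'
               , SINGLE_BLOB ) AS src;
-- @BackupBits is then carried inside a Service Broker message and inserted
-- into the hub's VARBINARY column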

Hijacking a Spoke Admin account would provide no access to any data rows on the hub that did not originate at the current spoke using the current encryption hierarchy. Yes, the hijacked account could query the global catalog schema, but without the encryption hierarchy from the encrypting SQL Server the raw data will be of no use. The hub requires no cryptographic support yet can be the secure offsite storage for the encryption hierarchy backups from a large number of SQL Servers.

SQL Azure can host the hub database. Let me say that differently: SQL Azure can host the hub database! Do you realize what just happened to the cost excuse for not keeping a secure offsite copy of all key and certificate backups, as is recommended best practice in Books Online? The hub does not need to know about encryption, yet it is storing a deeply encrypted data set and is able to support meaningful queries. The fact that the data actually includes the encryption keys used to encode the data is hopefully interesting, but it is not important to the concept of separating keys and certificates from encoded data to prevent data leaks.

The spoke is still vulnerable to compromises that might allow a malicious user to gain access to open local keys and decipher data previously encoded by that spoke. To better protect the spoke’s encryption hierarchy, the copy of the Master Key for the database where the catalog schema is created (the copy encrypted into the master database by the Service Master Key) is dropped. Only the phrase-encrypted copy of the Master Key stored in the database is kept. The result is that the phrase must be presented to open the key. This probably doesn’t scale well much beyond 1-2 people needing to know the phrase on a particular spoke, but it is effective in restricting even sysadmin members without that phrase from decoding the cipher text.
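
In T-SQL terms, that hardening step amounts to something like the following, run in the catalog schema’s database (the phrase is a placeholder known only to the 1-2 people mentioned above):

ALTER MASTER KEY DROP ENCRYPTION BY SERVICE MASTER KEY;
GO
-- from now on the phrase must be supplied before the database Master Key can be used
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'phrase known only to the spoke admins';
-- ... encryption hierarchy work ...
CLOSE MASTER KEY;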

Posted in Encryption Hierarchies, Secure Data | Leave a comment

T-SQL Cryptographic Patterns – part 7: who was that masked man?

SQL Server 2012 gives the DBA a shove toward Extended Events – and away from SQL Trace – with regard to monitoring. The trace catalog views are marked for deprecation with advice to use Extended Events. Extended Events do a better job of decoupling the application workload and the monitoring workload. Extended Events provide a more complete one-stop event-ing experience over SQL Trace resulting in a somewhat more flexible and somewhat deeper insight into performance and security events. 

SQL Server 2012 adds an SSMS Extended Events properties dialog and wizard that are similar to Profiler. Creating a session can be as easy as selecting a template and taking the defaults. Of course, T-SQL can also be used to administer event sessions, to query event targets, or to query the Extended Events catalog views and DMVs.

Extended Events Sessions are better able to limit performance impact than trace.

  • For starters extended events takes the insert into the collection target – be that a ring buffer, table or file – out of the monitored user process.
  • Then, while creating the session, the EVENT_RETENTION_MODE setting resists the NO_EVENT_LOSS value by throwing up an “are you sure” pop-up that warns of potential performance issues – not unlike SQL Trace. The other two possible EVENT_RETENTION_MODE values indicate memory reclamation strategies that determine the data chunk size to be thrown on the floor when event output cannot keep up: ALLOW_SINGLE_EVENT_LOSS discards an event at a time and ALLOW_MULTIPLE_EVENT_LOSS discards a buffer of events at a time.
  • And, as shown in the UI screen shot, a filter operator on the session_id provides yet another way to throw the monitor’s collected data on the floor. The divides_by_uint64 operator means that events where the Field column value can be evenly divided by the Value column value will be collected. A Value of 2 would cause the session to attempt to collect every other qualifying event. A value of 5 as shown below – the default from the Query Wait Statistics template – indicates that every 5th qualifying event will be collected into the session target.
Extended Event filter operators in SSMS 2012

Of course T-SQL’s CREATE/ALTER/DROP EVENT SESSION DDL will do everything the GUI will do.
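
A minimal sketch, loosely following the Query Wait Statistics template settings described above (the session name is arbitrary and the event/target choices are assumptions, not the template’s exact definition):

CREATE EVENT SESSION [QueryWaitSample] ON SERVER
ADD EVENT sqlos.wait_info
    ( WHERE ( [package0].[divides_by_uint64]( [sqlserver].[session_id], (5) ) ) )
ADD TARGET package0.ring_buffer
WITH ( EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS );
GO
ALTER EVENT SESSION [QueryWaitSample] ON SERVER STATE = START;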

SQL Server Event Output Obfuscation

SQL Server’s event-ing subsystems attempt to obfuscate sensitive information captured in the event output. SQL Trace, Alerts, Event Notifications, DDL Triggers, Extended Events and SQL Audit for example will all mask DDL passwords/phrases and statements involving a cryptographic method. However debuggers, OS/network trace tools and many other process monitors are not necessarily subject to this SQL Server event obfuscation.

It is tempting to want to exploit event output obfuscation in the effort to increase protection of sensitive data. Unfortunately SQL Server’s event output obfuscation provides no protection for text values passed to SQL Server or between sql_modules. This is a fundamental limitation for the pragmatic application of SQL Server cryptography: most data simply does not originate at the database. The best place and time to encrypt sensitive data is where the data is originated, as it is originated. Waiting until data reaches the database to scramble the bits increases the window of opportunity for a leak or compromise – precisely the type of leak that SQL Server event obfuscation pretends to plug. When cipher text is the only data passed through the database connection, there is far less chance that the clear text might be unknowingly revealed to any snoops on the ride to the database or to any snoops watching a SQL Server event output stream.

Similar obfuscation leaks can occur after the data reaches SQL Server. The more difficult offenders to identify are likely to be passed parameters and/or dynamic SQL. Assignment operations are also potentially problematic. Building a string using T-SQL assignment expressions that will subsequently be dynamically executed can leak while building the string and again when the dynamic statement is submitted for execution.

SQL Server originated events pull the clear text of any alpha-numeric value passed to a sql_module as clear text. As a result, intercepting input using Profiler is simple given the necessary permission (ALTER TRACE). Proactive monitoring that can trigger intelligent remedies – not only when intrusion is detected but any time that known-to-leak events are raised – is necessary to prevent obfuscation leaks: a monitor of system monitoring events…

In the balance, SQL Server event output obfuscation is not all that usable or even useful in its current state. Mostly, it’s just there. Not much the DBA can do with it or to it. To protect the event output it is necessary to combine the obfuscation that is in place with assignment operations for any sensitive values. This is burdensome because the only way to know you have something that works is to collect events and study the output. In the context of protecting sensitive information such coding cost and complexity is unacceptable.

So, how could it be better?

Maybe if there were declarative guarantees of obfuscation for user designated columns and variables – in all circumstances and for all event subsystems, sorta like a lock hint or an index – then event obfuscation could become a tool. But this is certainly not something Microsoft – or most any software vendor – would do without a loud clamor from customers. Since most applications store sensitive data as clear text in the database and/or sparingly use encryption as required for compliance to a minimal threshold, I see no reason to anticipate an uproar. No big deal, I reckon. SQL Server was already a second rate place to implement most cryptography because it doesn’t originate much data.

Declarative control of obfuscation could help prevent leaks from called modules and dynamic SQL in process, but it has no effect on data passed in to SQL Server. If sensitive data that will be passed to SQL Server must be protected from SQL Server event leakage, then encrypt it before sending it to the database. And then only decrypt at the database if that is required to query the data.

I can use Extended Events instead of SQL Trace. No problem there. It is an improvement. However SQL Trace is not going away. That means if I want to roll out a schema or change scripts that contain sensitive information as clear text, and I have any concerns that someone might compromise the data by snooping on the SQL Server activity, I need to be sure to include 2 (additional?) work items in the change requirements:

  1. Identify or develop suitable monitors to identify SQL Server event subsystem snoopers. 
  2. Code and test the change such that I am certain that no clear text leaks are exposed though any event of any event subsystem.

May as well jump in to a black hole…

SQL Server 2012 RTM extended events do not give up much info about the obfuscated text.

<event name="sp_statement_starting"…

<value>*encrypt—————–/~/—</value>

</event>

SQL Trace – and so Profiler – is somewhat more informative to the snooper, revealing the operation that triggered the obfuscation.

<Column id="1" name="TextData">
-- ‘OPEN SYMMETRIC KEY’ was found in the text of this event.
-- The text has been replaced with this comment for security reasons.
       </Column>

Event Notification obfuscation leaves only “secrets” to the imagination/investigation, revealing more configuration information than obfuscated trace output. In Profiler I could see a key was opened. In the Event Notification I can also see that a password was used.

  <TextData>OPEN SYMMETRIC KEY ErrorKey
    DECRYPTION BY PASSWORD = ‘******’;  </TextData>

SQL Audit obfuscates about like Event Notifications do. I would have expected SQL Audit to work like an Extended Event. Go figure.

Statement               OPEN SYMMETRIC KEY ErrorKey

                DECRYPTION BY PASSWORD = ‘******’;

It not so important that monitoring and the audit trail rely upon extended events or trace events or notification events or Weekly Reader events or all of the above. It is important to consider all data bridges including each of the various SQL Server event streams if the plan is to detect sensitive data bridges and prevent leaks by not supplying unauthorized bridges with sensitive data.

Extended Event obfuscation in SQL Server 2012 seems to remain an undocumented feature so likely remains otherwise incomplete. The highest certainty will come from collection and detail analysis of trace and event session output in search of any secrets revealed in the event stream.

Whether using Extended Events or SQL Trace, or a similarly tedious study of any of the several events that do not directly map to user input; things like lock acquired/lock released or scan started/scan stopped, may be enough information to betray an application. Defense-in-depth demands that access to SQL Trace and Extended Events be strictly controlled and closely monitored in addition to solid cryptograpic protection.

Stored procedures can be trained to alert or abort if a required Extended Events session is not active, or if an active session is not found in a white-list. Traces are known by a system defined trace_id instead of a friendly target name. The safest way to determine whether to white-list a trace is to analyze its definition: if it leaks, don’t white-list it.
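
A minimal sketch of such a check, assuming a hypothetical eha.SessionWhiteList table of approved Extended Events session names:

IF EXISTS ( SELECT *
            FROM sys.dm_xe_sessions AS s
            WHERE NOT EXISTS ( SELECT *
                               FROM eha.SessionWhiteList AS w
                               WHERE w.SessionName = s.name ) )
BEGIN
    -- abort the caller when an unexpected session is active
    THROW 50000, 'An Extended Events session outside the white-list is active.', 1;
END;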

Once you have the name of an event session in T-SQL, it is only a little more work to identify the events being monitored. This is true for a trace or an event session, although the Extended Events catalog views and DMVs are nicer to work with using T-SQL.

SELECT s.name, t.execution_count, e.event_name 
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
ON s.address = t.event_session_address
JOIN sys.dm_xe_session_events AS e
ON t.event_session_address = e.event_session_address
WHERE s.Name = 'event session name'

That query works for event session targets -vs- the CROSS APPLY of fn_trace_gettable needed to read the trace source:

SELECT 'Default Trace' 
     , COUNT(*) AS execution_count
     , ISNULL(  e.name 
             , 'unmapped eventid ' 
             + CAST( trc.EventClass AS NVARCHAR(11) ) ) 
FROM (SELECT id
           , LEFT( [path]
                 , LEN( [path] ) 
                 - PATINDEX( '%\%', REVERSE( [path] ) ) )  
            + N'\log.trc' AS tracefile 
      FROM sys.traces
      WHERE is_default = 1 ) AS t
CROSS APPLY ::fn_trace_gettable( tracefile, DEFAULT) trc
LEFT JOIN sys.trace_events e 
ON e.trace_event_id = trc.EventClass
GROUP BY trc.EventClass, e.name;

Applications that rely upon T-SQL encryption and also carry a mandate to protect sensitive data from snoopers with elevated authority are in a tough spot, especially these days as major holes are being poked in on-the-wire encryption. Adding complexity to the application – more complexity than has already been added with encryption – is the only way I know of to achieve obfuscation.

A look at what it takes to submit a stored procedure using SSMS in SQLCMD mode will illustrate the difficulty. SQLCMD mode buys a few readable variable token literals (e.g., wrapped in $()) in place of hard values. It also allows us to enter secrets into SQL Server that are unseen by any SQL Server event-ing subsystem.

DECLARE @BackupPhrase AS VARBINARY(8000); 
DECLARE @KeyPhrase AS VARBINARY(8000); 

CREATE SYMMETRIC KEY $(SESSION_SYMMETRIC_KEY)
WITH ALGORITHM = AES_256
   , KEY_SOURCE = '$(SESSION_KEY_SOURCE)'
   , IDENTITY_VALUE = '$(SESSION_KEY_IDENTITY)'
ENCRYPTION BY PASSWORD = '$(SESSION_KEY_ENCRYPTION_PHRASE)';

OPEN SYMMETRIC KEY $(SESSION_SYMMETRIC_KEY)
DECRYPTION BY PASSWORD = '$(SESSION_KEY_ENCRYPTION_PHRASE)';

SET @BackupPhrase = (SELECT ENCRYPTBYKEY( KEY_GUID('$(SESSION_SYMMETRIC_KEY)' )
                                        , CAST( '$(master_DMK_BACKUP_PHRASE)' AS NVARCHAR(128) ) ) );
SET @KeyPhrase = (SELECT ENCRYPTBYKEY( KEY_GUID('$(SESSION_SYMMETRIC_KEY)' )
                                     , CAST( '$(master_DMK_ENCRYPTION_PHRASE)' AS NVARCHAR(128) ) ) );

EXEC $(EHA_SCHEMA).BackupDatabaseMasterKey @DbName = N'master'
                                         , @BackupPhrase = @BackupPhrase
                                         , @KeyPhrase = @KeyPhrase;

It is not really much more than an interesting aside, but all you have to do to keep SQLCMD variable assignment out of the trace output is to make sure the assignments are in a batch that does nothing. Put only the :setvar statements between two GO lines in the script. SQL Trace will always omit the no-op. Without this batch isolation, secrets may be captured in the trace output in clear text.
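
A minimal sketch of that batch isolation, reusing the SQLCMD variable names from the script above (the phrases are placeholders); with nothing but :setvar lines between the two GO lines, the batch is a no-op and stays out of the trace output:

GO
:setvar SESSION_KEY_ENCRYPTION_PHRASE "a placeholder phrase"
:setvar master_DMK_BACKUP_PHRASE "another placeholder phrase"
:setvar master_DMK_ENCRYPTION_PHRASE "yet another placeholder phrase"
GO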

The CREATE SYMMETRIC KEY DDL password will be obfuscated. The assignment of cipher text values to the T-SQL variables will be obfuscated because the ENCRYPTBYKEY function is called in the assignment. The passed phrases are binary cipher text.

DECLARE @BackupPhrase AS VARBINARY(8000); 
DECLARE @KeyPhrase AS VARBINARY(8000); 

--*CREATE SYMMETRIC KEY------------------
------------------------
-----------------------------------------------------
--------------------------------------------------

--*OPEN SYMMETRIC KEY------------------
--------------------------------------------------

--*SELECT------------------------------------------------------------------
-----------------------------------------------------------------------------------------------
--*SELECT---------------------------------------------------------------
--------------------------------------------------------------------------------------------
EXEC eha.BackupDatabaseMasterKey @DbName = N'master'
                                         , @BackupPhrase = @BackupPhrase
                                         , @KeyPhrase = @KeyPhrase;

And like this from the Extended Event subsystems perspective:

[CDATA[DECLARE @BackupPhrase AS VARBINARY(8000);

DECLARE @KeyPhrase AS VARBINARY(8000); 

--*CREATE SYMMETRIC KEY------------------
------------------------
-----------------------------------------------------
--------------------------------------------------

--*OPEN SYMMETRIC KEY------------------
--------------------------------------------------

--*SELECT------------------------------------------------------------------
-----------------------------------------------------------------------------------------------
--*SELECT---------------------------------------------------------------
--------------------------------------------------------------------------------------------
EXEC eha.BackupDatabaseMasterKey @DbName = N'master'
                                         , @BackupPhrase = @BackupPhrase
                                         , @KeyPhrase = @KeyPhrase;
]]>

Hey! What happened to <value>*encrypt—————–/~/—?

The symmetric key is a # temp key. That guarantees that the key will disappear when the user connection is closed – and that it never existed before the user connection was opened. No values will be stored scrambled by this key. Its purpose is to obfuscate sensitive values passed to/between sql_modules. The encryption will be handed off to a permanent key dedicated to encrypting phrases in the database before the data is persisted.

Extended Events and SQL Trace produce identical batch obfuscation but produce different obfuscation at the statement level, highlighting the importance of evaluating all SQL Server event subsystems to prevent obfuscation leaks. 

Trying to manage obfuscation is a tad too obscure. I suspect most bottom liners will take the risk over the work. And the leaks will continue.

Posted in Encryption Hierarchies, Monitoring, Secure Data | Leave a comment

T-SQL Cryptographic Patterns – part 6: a hole in the bucket

Continuing with a description of the patterns introduced previously as the dog food stored procedure: after the proc has completed a run-time inquisition to verify the authenticity and authority of the application’s actions in the booking phase of the session, and has won the cryptographic keys to the kingdom by running the gauntlet, the real work of the procedure can finally begin.

(Note that the SpokeInstall.sql script located in my github public repository may be easier to read or use than the in-line code below – be prepared for 8000 lines of T-SQL in the script. Simply search for the stored procedure name: BackupServiceMasterKey. Another option would be to refer back to the somewhat simplified version of the procedure shown in the “dog food” post mentioned above.)

The first real work task involving cryptography is the generation of a fully qualified backup file name as cipher text. Of course, this assumes that there is business consensus that the backup file, and therefore the file path & name stored in the database, represent sensitive data, and that malicious use of the backup file name could potentially compromise any data obfuscated under the SQL instance Service Master Key’s Windows Data Protection rooted SQL Server Encryption Hierarchy. If a malicious user can access the Service Master Key then even TDE protected backups may be at risk. Even authorized use must not expose the path & name as clear text that can be queried, traced, sniffed, probed, intercepted or otherwise read from the database file.

The “pattern” description begins with a few lines from the BackupServiceMasterKey stored procedure: generator functions create values for the VARBINARY(8000) typed @BackupName and @BackupPath local variables, and another function moves a deterministic salted hash of the @BackupName‘s clear text value into the INT typed @BackupNameBucket variable. As was the case with the running of the gauntlet, and as will continue to be the case as we traverse the sample procedure, the cryptographic operations are interleaved with authentication, authorization and business purposed application operations. The bucket, for example, will subsequently be used as an indexed access path for look up or reference without exposition of the clear text file name. The bucket will probably uniquely identify a row, but this is not guaranteed or important in this schema because we expect that the value will always be decoded after the bucket match to establish equality. The schema will support duplicate names, however the file system will not. Due to this potentiality, the bucket hash will never be used for any purpose other than the bucket match test. There will always be that little hole in the bucket that requires access to the deciphered name in order to assure value equality and/or uniqueness once the desired bucket is located.

  SET @BackupName = $(EHA_SCHEMA).NewMasterKeyBackupName( @NodeName );
  SELECT @BackupNameBucket = $(EHA_SCHEMA).AddSalt( '$(EHDB)'
                                                  , '$(EHA_SCHEMA)'
                                                  , '$(BACKUP_ACTIVITY_TABLE)'
                                                  , 'BackupNameBucket'
                                                  , Word )
  FROM ( SELECT CAST( DECRYPTBYKEY( @BackupName ) AS NVARCHAR(128) ) AS Word ) AS derived;
  SET @BackupPath = $(EHA_SCHEMA).BackupPath(@DbName);

All generator functions in the example return sensitive data as cipher text to protect the meaningful data from snoops. T-SQL functions cannot open keys and certificates, but they can encrypt and decrypt; the keys and certificates required must have already been opened in the current session. Furthermore, the core procs of the application from which the example is drawn – e.g., those white-listed in the running the gauntlet section of the Booking stored procedure – only accept sensitive values as cipher text. Encryption at the command line is required to prevent access to clear text both in memory and on-the-wire if SSL is not properly configured. Consider that BackupServiceMasterKey requires a VARBINARY(8000) input parameter defining the backup file encryption phrase. To be sure, the interaction of the T-SQL login, the database user, the connection, the white-listing caller procs, and the generator functions is an interesting overarching security pattern. Here we will instead traverse the inner-most cryptographic layer embodied by a few lines of code from the BackupServiceMasterKey proc that securely assign a T-SQL generated and T-SQL obfuscated fully qualified file name to local variables in the proc that can be passed or stored without revealing the clear text.

Using the bucket hash authorized users can match the backup file in the file system to the logged backup activity row without the need to decode the column – at least not in every row of the table – each time a backup is completed. @BackupNameBucket will be indexed when persisted and can subsequently be used as a foreign key reference and as a look-up index key.
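
A minimal sketch of such a look-up (the BackupName cipher text column is an assumed stand-in for the actual column name in the backup activity table):

SELECT ba.*
FROM $(EHA_SCHEMA).$(BACKUP_ACTIVITY_TABLE) AS ba
WHERE ba.BackupNameBucket = @BackupNameBucket             -- indexed bucket seek
AND CAST( DECRYPTBYKEY( ba.BackupName ) AS NVARCHAR(448) )
  = CAST( DECRYPTBYKEY( @BackupName ) AS NVARCHAR(448) ); -- close the hole in the bucket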

From an application functionality perspective, the bucket hash can optionally be used as the name of the file in the file system rather than using the clear text of the name generated. This will increase protection at the file system but reduce the obfuscation between the file system object and the database. The name is stored in the database only as cipher text preventing anyone browsing the file system from inferring the file content by the name when the hash is used as the name. Within the schema, a unique index creates a preferred error handling scenario, but only if the insert happens before a duplicate file name confounds a T-SQL mastered key or certificate export script/object.

(That highlights something of a special challenge for this application not really germane to the pattern under discussion: the export of a key or certificate to file cannot be rolled back in a T-SQL user transaction. Transactions are still important and valid, but it is also valid to pay attention to the order of work steps to prevent a file from being created by the execution of the stored procedure that later fails and rolls back all evidence of the now orphan export in the database.)

In this sample the values assigned to local variables could as easily have been generated in-line in a DML statement or a dynamic DDL statement without using local variables. That too would help protect against exposure from tracing or snooping the execution context. In this stored procedure the values are needed more than once during execution, so I am setting myself up for the need to decrypt the value more than once in the same proc! The @BackupName cipher text is decrypted first to hash a bucket value as shown above, then again to build a dynamic DDL key backup statement as in:

  SELECT @BackupDDL = FORMATMESSAGE ( $(MESSAGE_OFFSET)25
       , CAST( DECRYPTBYKEY( @BackupPath
                           , 1
                           , @DbName ) AS NVARCHAR(1024) )
       , CASE WHEN @UseHash = 1
              THEN CAST( @BackupNameBucket AS NVARCHAR(448) )
              ELSE CAST( DECRYPTBYKEY( @BackupName ) AS NVARCHAR(448) )
              END
       , '$(MASTER_KEY_BACKUP_EXT)'
       , CAST( DECRYPTBYKEY( Value
                           , 1
                           , CAST( DECRYPTBYKEY(Name) AS NVARCHAR(448) ) ) AS NVARCHAR(128) ) )
  FROM @tvp;

where the $(MESSAGE_OFFSET)25 message_id has been populated in sys.messages with:

  EXEC sp_addmessage $(MESSAGE_OFFSET)25
                    , 1
                    , 'USE master; BACKUP SERVICE MASTER KEY TO FILE = ''%s%s%s'' ENCRYPTION BY PASSWORD = ''%s'''
                    , 'us_english'
                    , 'FALSE'
                    , 'replace';

By populating and passing the variable(s) as cipher text the assignment operation and the dynamically executed DDL statement are obfuscated in SQL Trace and monitored Extended Events. The clear text is never placed into a memory allocated variable – even when stashed in the variable registers within the SPID’s execution context.

I reckon it is possible that the clear text values could be mined from registers as they tick through the CPU during assignment – or, more likely, as they are entered by the user. I mention this to highlight the reality that T-SQL cryptography must be part of a multi-faceted network+host+sql+social security strategy. Otherwise cryptographic obfuscation is merely an inconvenience to the malicious user.

Relying on any single mechanism to protect data is a slippery slope. It is important to recognize that CPU register mining – or whatever the latest hacker innovation – will happen, and will be exploited for honest and dishonest purposes and will become a necessary security consideration as exploits enabled by the innovation(s) become uncomfortably embarrassing.

It would be less resource intensive to wait to encrypt the value when it is inserted, but that also exposes the clear text value in the execution context buffers. Event engines (e.g., SQL Trace, DMVs, Alerts, Event Notifications, Extended Events, SQL Audit), Showplan, and a myriad of low level sniffers, process browsers and debuggers can snoop around in the execution context. Granted: elevated authority is needed for most of the exploits. If the organization fully trusts all personnel with elevated authority that will ever access the server or VM, and is also confident that the authority hierarchy will not be breached, then the organization may see no reason to worry about what can be seen in memory. Hope, luck and uninteresting data are all that will protect that organization from going to the fire with an empty bucket.

File naming business rules are embedded within the NewMasterKeyBackupName function. By rule a representation of the backup time is concatenated to an otherwise deterministic literal. In the case of a Service Master Key backup, the descriptor will include the SQL instance name, the literal “ServiceMasterKey” and the date & time stamp. The value will be encrypted in the statement where it is generated. The time precision makes for a mighty small window for a malicious user attempting to anticipate the value. However, knowledge of the logic in the function – or a good long contemplative look at the un-encrypted columns in the table – might yet reveal enough information for a relatively precise dictionary attack vector to a trained eye.

CREATE FUNCTION $(EHA_SCHEMA).NewMasterKeyBackupName
  ( @DbName NVARCHAR(128) = NULL )
RETURNS VARBINARY(8000)
$(WITH_OPTIONS)
AS
BEGIN
  DECLARE @NewName VARBINARY(8000);
  WHILE $(EHA_SCHEMA).CheckFile( @NewName ) <> 1
  SET @NewName = 
    ( SELECT ENCRYPTBYKEY( KEY_GUID('$(FILE_SYMMETRIC_KEY)')
                                   , CAST ( REPLACE(@@SERVERNAME,'\','$') + '__' 
                                   + ISNULL ( @DbName + '__MasterKey' , 'ServiceMasterKey' ) + '__' 
                                   + FORMAT( SYSUTCDATETIME(), 'yyyyMMddHHmmssfffffff') AS NVARCHAR(448) ) ) 
      FROM sys.certificates AS c
      JOIN sys.crypt_properties AS cp
        ON c.thumbprint = cp.thumbprint
      WHERE c.name = '$(OBJECT_CERTIFICATE)'
      AND c.pvt_key_encryption_type = 'PW'
      AND cp.major_id = @@PROCID
      AND @@NESTLEVEL > 1
      -- @DBName can be null here
      AND IS_OBJECTSIGNED( 'OBJECT' , @@PROCID, 'CERTIFICATE', c.thumbprint ) = 1
      AND IS_ROLEMEMBER ( '$(EHADMIN_ROLE)', SYSTEM_USER ) = 1
      AND SYSTEM_USER = ORIGINAL_LOGIN() );
  RETURN ( @NewName );
END
GO

If the FILE_CERTIFICATE that encrypts the FILE_SYMMETRIC_KEY is already open the cipher text of the generated name is returned by the function. If the certificate is not open or the name fails the validation of the CheckFile function, NewMasterKeyBackupName returns NULL.

The WHILE loop within the NewMasterKeyBackupName function works proactively to control access to sensitive information and at the same time prevent DRI and naming exception execution halts.   The CheckFile function in the WHILE loop verifies the schema integrity, the T-SQL call stack depth, the user’s authenticity, the uniqueness of the name and the absence of selected SQL Injection substrings within the name.

CREATE FUNCTION $(EHA_SCHEMA).CheckFile
  ( @Name VARBINARY(8000) )
RETURNS BIT
$(WITH_OPTIONS)
AS
BEGIN
  -- the decrypted name is needed by the salted hash and by the injection filters below
  DECLARE @ClearName NVARCHAR(448) = CAST( DECRYPTBYKEY( @Name ) AS NVARCHAR(448) );
  RETURN ( SELECT IIF( ba.[Existing] = 0, 1, 0 )
           FROM sys.certificates AS c
           JOIN sys.crypt_properties AS cp
             ON c.thumbprint = cp.thumbprint
           OUTER APPLY ( SELECT COUNT(*) AS [Existing]
                         FROM $(EHA_SCHEMA).$(BACKUP_ACTIVITY_TABLE)
                         WHERE BackupNameBucket = $(EHA_SCHEMA).AddSalt( '$(EHDB)'
                                                                       , '$(EHA_SCHEMA)'
                                                                       , '$(BACKUP_ACTIVITY_TABLE)'
                                                                       , 'BackupNameBucket'
                                                                       , @ClearName ) ) AS ba
           WHERE c.name = '$(OBJECT_CERTIFICATE)'
           AND c.pvt_key_encryption_type = 'PW'
           AND cp.major_id = @@PROCID
           AND @@NESTLEVEL > 1
           AND @Name IS NOT NULL
           AND IS_OBJECTSIGNED('OBJECT', @@PROCID, 'CERTIFICATE', c.thumbprint) = 1
           AND IS_ROLEMEMBER ( '$(EHADMIN_ROLE)', SYSTEM_USER ) = 1
           AND SYSTEM_USER = ORIGINAL_LOGIN()
           -- injection and reserved device name filters applied to the decrypted name
           AND ( PATINDEX('%[#,.;:"'']%', @ClearName)
               + PATINDEX('%--%', @ClearName)
               + PATINDEX('%*/%', @ClearName)
               + PATINDEX('%/*%', @ClearName)
               + PATINDEX('%DROP%', @ClearName)
               + PATINDEX('%CREATE%', @ClearName)
               + PATINDEX('%SELECT%', @ClearName)
               + PATINDEX('%INSERT%', @ClearName)
               + PATINDEX('%UPDATE%', @ClearName)
               + PATINDEX('%DELETE%', @ClearName)
               + PATINDEX('%GRANT%', @ClearName)
               + PATINDEX('%ALTER%', @ClearName)
               + PATINDEX('%AUX%', @ClearName)
               + PATINDEX('%CLOCK$%', @ClearName)
               + PATINDEX('%COM1-8%', @ClearName)
               + PATINDEX('%CON%', @ClearName)
               + PATINDEX('%LPT1-8%', @ClearName)
               + PATINDEX('%NUL%', @ClearName)
               + PATINDEX('%PRN%', @ClearName) ) = 0 );
END
GO

To help prevent SQL injection that might be executed in SQL Server – or perhaps even in the file system, since this is a file name – the injection filters included in CheckFile target both T-SQL and command-line attack vectors.
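
As a standalone illustration of how the summed PATINDEX test behaves – the candidate name below is hypothetical, and in CheckFile the real test runs against the decrypted backup name – a single offending substring is enough to make the sum non-zero and reject the name:

-- toy illustration only: a hypothetical candidate name carrying injection debris
DECLARE @Candidate NVARCHAR(448) =
  N'MYSERVER__master__MasterKey__20121101093000000;DROP TABLE x--';
SELECT CASE WHEN PATINDEX('%[#,.;:"'']%', @Candidate)
          + PATINDEX('%--%', @Candidate)
          + PATINDEX('%DROP%', @Candidate) = 0
       THEN 'name accepted' ELSE 'name rejected' END AS FilterResult;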

The AddSalt function that we first saw invoked in the snippet we started with from BackupServiceMasterKey is called again in CheckFile. In BackupServiceMasterKey, AddSalt produces a deterministic but randomized hash of the name that will be persisted. In CheckFile, AddSalt reproduces the hash and compares it to the persisted value. This gives increased confidence that the name has not been changed along the way.

CREATE FUNCTION $(EHA_SCHEMA).AddSalt
  ( @DbName NVARCHAR(128)
  , @SchemaName NVARCHAR(128)
  , @TableName NVARCHAR(128)
  , @ColumnName NVARCHAR(128)
  , @Word NVARCHAR(128) )
RETURNS INT
$(WITH_OPTIONS)
AS
BEGIN
  DECLARE @SaltName NVARCHAR(443) = FORMATMESSAGE( $(MESSAGE_OFFSET)01
                                                 , @DbName
                                                 , @SchemaName
                                                 , @TableName
                                                 , @ColumnName);
   RETURN (SELECT ABS( CHECKSUM( HASHBYTES( '$(HASHBYTES_ALGORITHM)'
                                          , @Word + CAST( DECRYPTBYKEY( Value
                                                                      , 1
                                                                      , @SaltName ) AS NVARCHAR(128) ) )))
           FROM ( SELECT Id, Name, Value
                  FROM $(EHA_SCHEMA).$(NAMEVALUES_TABLE)
                  WHERE NameBucket = ABS( CHECKSUM( HASHBYTES( '$(HASHBYTES_ALGORITHM)'
                                                             , RIGHT( @SaltName, FLOOR( LEN( @SaltName ) / 2 ))))) 
                ) AS derived );
END
GO

Not only must the key used to encrypt the Value column be open for this function to return anything but NULL, there is also no way to tell, just from looking at the function, which key must be opened.
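
For the administrator who does hold the right phrases, a quick sanity check on key state is available from the catalog; sys.openkeys lists what is open in the current session without revealing how any of it was opened:

-- which keys are open for this session in the current database?
-- (sys.openkeys shows that a key is open, not what protects it)
SELECT database_name
     , key_name
     , key_guid
FROM sys.openkeys
WHERE database_name = DB_NAME();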

In contrast to NewMasterKeyBackupName, the BackupPath function is deterministic. The BackupPath function verifies that the file encryption key is opened and that the function is signed by the schema’s signing certificate among the several security validations completed before returning the encrypted path.

BackupPath is the only T-SQL location that must change in order to use a different path. Hardcoding may expose the path even to high level sniffing (not just trace and extended events but also ad hoc catalog queries) if the function is not created WITH ENCRYPTION.

CREATE FUNCTION $(EHA_SCHEMA).BackupPath
  ( @DbName NVARCHAR(128) )
RETURNS VARBINARY(8000)
$(WITH_OPTIONS)
AS
BEGIN
  RETURN ( SELECT ENCRYPTBYKEY( KEY_GUID( '$(FILE_SYMMETRIC_KEY)' )
                                        , CAST( '$(VHD_LETTER):\' AS NVARCHAR(1024) )
                                        , 1
                                        , @DbName )
           FROM sys.certificates AS c
           JOIN sys.crypt_properties AS cp
           ON c.thumbprint = cp.thumbprint
           CROSS JOIN ( SELECT TOP(1) KeyGuid
                         FROM $(EHA_SCHEMA).$(BOOKINGS_TABLE)
                        ORDER BY RecCreatedDt DESC, Id DESC ) b
           WHERE b.KeyGuid = KEY_GUID( '$(SESSION_SYMMETRIC_KEY)' )
           AND c.name = '$(OBJECT_CERTIFICATE)'
           AND c.pvt_key_encryption_type = 'PW'
           AND cp.major_id = @@PROCID
           AND @@NESTLEVEL > 1
           AND DB_ID(@DbName) IS NOT NULL
           AND IS_OBJECTSIGNED('OBJECT', @@PROCID, 'CERTIFICATE', c.thumbprint) = 1
           AND IS_ROLEMEMBER ( '$(EHADMIN_ROLE)', SYSTEM_USER ) = 1
           AND SYSTEM_USER = ORIGINAL_LOGIN() ) ;
END
GO

Even when a SQL object is obfuscated by WITH ENCRYPTION, the content – in the case of this function, including the backup path – is visible in the database file and by querying the catalog through the Dedicated Admin Connection (DAC). If those security holes must be plugged, storing the value as a cipher text name-value pair and deciphering it into the result may be more secure. As far as I can tell, legitimate use of the DAC includes the need to view system encrypted T-SQL objects to find out what is going wrong, so it is probably better not to plug that hole. An encrypted file system drive and/or Transparent Data Encryption (TDE), coupled with hardened administrative access control, can better plug the direct file access hole.
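
For what a routine (non-DAC) connection can and cannot see, the checks below are a small sketch; BackupPath stands in here for any module created WITH ENCRYPTION:

-- over a normal connection an encrypted module hides its definition
SELECT OBJECT_DEFINITION( OBJECT_ID('$(EHA_SCHEMA).BackupPath') ) AS [definition]; -- NULL when encrypted
SELECT definition
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('$(EHA_SCHEMA).BackupPath');                           -- also NULL when encrypted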

Perhaps the main take-away from the pattern described is that protecting a single piece of sensitive data – be that a password or a file path – is no simple matter if the vulnerabilities created by elevated access are genuinely considered. Treating a complex and data rich application such as SQL Server as an endpoint is a flashing neon sign inviting malicious use.


T-SQL Cryptographic Patterns – part 5: running the gauntlet

I’d like to revisit the sample script for the stored procedure first pasted into part 3, with the focus on the patterns demonstrated. That script is taken from an administrative utility whose business purpose is to offer forward recovery and administrative key/phrase rotation services for any and all T-SQL Encryption Hierarchies. Readers responsible for a SQL Server with Transparent Data Encryption enabled and/or T-SQL certificates or keys deployed to a live environment are in need of such functionality.

A utility is helpful to standardize and automate such processing yielding a higher probability that key recovery and rotation will succeed.

Recall that the purpose of the sample stored procedure is to export the SQL Server’s Service Master Key to a backup file. The cleverly named sample procedure, BackupServiceMasterKey, is a method from a utility schema designed to:

  1. securely store secrets
  2. export T-SQL master key and certificate backups to local files
  3. securely store copies of the exports offsite

Secrets are encrypted and cataloged by purpose and version.  Secrets then are available on demand for authorized admins to:

  • open the utility’s encryption hierarchies
  • recall secrets as needed to encrypt or decrypt export files
  • restore or (re)create existing T-SQL cryptographic artifacts if necessary
  • allow automation that assures the integrity of the secure offsite location to authenticate in the distributed key backup environment
  • support non-repudiation
  • support audit

In part 3 I basically threw the sample procedure out there. I envisioned using the script as a device for describing patterns, and I guessed openly that the script could also be used by the reader for a quick T-SQL cryptography self-assessment. At the least, I figured I could expect any reader that spent a moment or two reviewing that script to be more aware of T-SQL cryptography in their world going forward. Now I must push for more depth. If in part 3 I succeeded in getting anyone to pay just a bit more attention to T-SQL cryptography, now I hope to enable T-SQL cryptography as a comfortably usable tool. With this aim, the coming posts offer something of a walk-through narrative of the sample. Note that the code may change, so it is probably best to find the procedure in the github project (see “.git my source” in the menu) for the most recent version.

An added complexity when using real-world applications to model concepts is that, in the real world, the patterns tend to be woven one into another – and into the rest of the application work flow – forming a fabric of interleaved patterns.

A facetious (academic?) example is often better when describing a fundamental concept or syntax. Such examples can be adjusted to more clearly convey a concept or practice. Books Online’s Transact-SQL Reference, for instance, is where I go to see an example and to get the right syntax. In lazy mode I will often just go to the bottom of the BOL page and use the first example as a syntax template to get me started.

An industrial sample includes subtleties of the implementation use case. The industrial sample is far more likely to convey a measure of performance and usability. This is owed in large part to the architectural fabric. However, without an adequate understanding of what the industrial application does, the complexities can be distracting when used as an example, especially when the sample is not germane to the reader’s working environment. Because the purpose of this sample is to provide best practice backup and recovery services for the T-SQL encryption hierarchy, a useful relevance to all environments is anticipated.

The schema of the sample is (partially) insulated from the SQL Server permission hierarchies with a phrase-only encrypted database Master Key. This allows for an audit enforced authentication layer that employs SQL Server database roles – more precisely, rows from sys.database_role_members – to restrict which admins are allowed to access the data.

Encrypt DMK by phrase only (from InstallSpoke.sql):

-- see security comments in sp_control_dbmasterkey_password documentation 
-- it is when a secret is passed to a procedure/function that it can sneak through
-- however in-line crypto-functions will always obfuscate in the SQL Trace
IF NOT EXISTS (SELECT * 
               FROM sys.symmetric_keys 
               WHERE symmetric_key_id = 101)
  CREATE MASTER KEY ENCRYPTION BY PASSWORD = '$(EHDB_DMK_ENCRYPTION_PHRASE)';
GO
-- remove SMK encryption of the DMK - phrase encryption only
OPEN MASTER KEY DECRYPTION BY PASSWORD = '$(EHDB_DMK_ENCRYPTION_PHRASE)'
GO
ALTER MASTER KEY DROP ENCRYPTION BY SERVICE MASTER KEY; 
GO
ALTER MASTER KEY REGENERATE 
WITH ENCRYPTION BY PASSWORD = '$(EHDB_DMK_ENCRYPTION_PHRASE)';
GO
OPEN MASTER KEY 
DECRYPTION BY PASSWORD = '$(EHDB_DMK_ENCRYPTION_PHRASE)';
GO

This prevents any user without access to the EHDB_DMK_ENCRYPTION_PHRASE – regardless of whether they connect to the database as dbo, guest, or someone in between – from viewing any information encrypted by the DMK rooted encryption hierarchy. The phrase must be provided to unscramble the cipher text. No elevated authority can get past that requirement. There is still the risk that the phrase will be exposed or leaked.
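
The effect is easy to demonstrate. In the sketch below the table, key and certificate names are placeholders for any data protected under the phrase-only DMK hierarchy; until the phrase is supplied, even sysadmin gets nothing but NULL.

-- sketch only: dbo.SensitiveStuff, ValueKey and ValueCert are placeholder names
SELECT CONVERT( NVARCHAR(4000), DECRYPTBYKEY( Value ) ) AS ClearText
FROM dbo.SensitiveStuff;   -- NULL for everyone, sysadmin included

OPEN MASTER KEY DECRYPTION BY PASSWORD = '$(EHDB_DMK_ENCRYPTION_PHRASE)';
OPEN SYMMETRIC KEY ValueKey DECRYPTION BY CERTIFICATE ValueCert;

SELECT CONVERT( NVARCHAR(4000), DECRYPTBYKEY( Value ) ) AS ClearText
FROM dbo.SensitiveStuff;   -- clear text, now that the phrase has opened the hierarchy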

It is a given that encryption hierarchies rooted at respectable external authorities are theoretically more secure than SQL Server DPAPI or phrase rooted hierarchies. Storing the keys with the data, it is reasoned, is tantamount to leaving the key in the ignition and heading into the bar for a night of heavy drinking. Not so sure I agree. Nonetheless, it is true that Master Key and phrase rooted hierarchies – and I suppose even cars with the key in the ignition – are more accessible for malicious use as well as for authorized use. However, today the vast sea of data is at risk not because database cryptography is so blasted vulnerable or weak but because data is stored as clear text.

T-SQL cryptography is worthy of consideration. For most applications, the database probably is the best place for some of the cryptographic efforts to protect sensitive data. It is a defense-in-depth best practice to not put all your eggs in one basket. It is a cryptographic best practice not to use a key or certificate for more than one purpose. T-SQL cryptography can help to achieve such best practice objectives. The cost to develop and maintain database cell encryption using phrases and DPAPI rooted hierarchies is hard to beat. Data originated at the database is well suited for cryptographic protections applied at the database. Audit trails, logging tables, schema object modifications, database error messages and configuration changes often include sensitive data that could properly be protected by cell encryption at the database.

The Encryption Hierarchy Administration schema from which the sample procedure was extracted encapsulates all database cell cryptography. This is helpful for command-line and automation integration when the driver/caller process is not able to encrypt sensitive data appropriately. Many legacy applications written with stored procedures can be retrofitted with encryption of sensitive data with no or minimal changes to the application code.
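
A minimal sketch of that retrofit idea follows – it is not the utility’s code, and the procedure, table, key and certificate names are placeholders. The legacy application keeps calling the same procedure with the same parameters; only the procedure body changes so that the value is stored as cipher text. (The clear text still transits the parameter, which is exactly why the utility itself passes secrets only as encrypted parameters.)

-- retrofit sketch (placeholder names): the application-facing signature is unchanged,
-- encryption is added inside the procedure body
CREATE PROCEDURE dbo.SaveNote
  ( @NoteId INT
  , @Note NVARCHAR(2000) )
AS
BEGIN
  OPEN SYMMETRIC KEY NoteKey DECRYPTION BY CERTIFICATE NoteCert;
  UPDATE dbo.Notes
  SET NoteCipher = ENCRYPTBYKEY( KEY_GUID('NoteKey'), @Note )
  WHERE NoteId = @NoteId;
  CLOSE SYMMETRIC KEY NoteKey;
END
GO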

The sample procedure is one of several wrapper procedures in the Encryption Hierarchy Administration schema that are systematically authenticated during execution. The procedure names are compiled into an a priori white-list that is maintained in one location: the body of a signed, system encrypted stored procedure. It is very important to create that procedure WITH ENCRYPTION to protect the white-list from view. All procedures in the white list acquire the authority and prove the authenticity to begin their primary application tasks by calling the stored procedure that contains the white list. Within the called stored procedure is a conditional gauntlet that the process must navigate to re-assert the necessary authority and authenticity. This called stored procedure is named Book, if you are interested to have a look in InstallSpoke.sql. The name follows a reservation metaphor: valid and properly authorized users, and only when using white-listed and signed methods, book an intent to open phrase based encryption hierarchies – without actually providing the phrase – after running the gauntlet.

running the gauntlet in BackupServiceMasterKey

  BEGIN TRY
    EXEC $(EHA_SCHEMA).OpenAuditKey @@PROCID, @SchemaVerified OUTPUT;
    SET @Parameters = ENCRYPTBYKEY( Key_GUID('$(AUDIT_SYMMETRIC_KEY)')
                                  , FORMATMESSAGE( '@DbName = ''%s''' 
                                               + ', @BackupPhrase = ''%s''' 
                                               + ', @KeyPhrase = ''%s''' 
                                               + ', @ForceNew = %d'
                                                 , @DbName
                                                 , @BackupPhrase
                                                 , @KeyPhrase
                                                 , @ForceNew )
                                  , 1
                                  , CAST( KEY_GUID('$(SESSION_SYMMETRIC_KEY)') AS NCHAR(36) ) );
    CLOSE SYMMETRIC KEY [$(AUDIT_SYMMETRIC_KEY)];
    EXEC $(EHA_SCHEMA).Book @@PROCID
                          , @Parameters
                          , @SchemaVerified
                          , @Id OUTPUT
                          , @MAC OUTPUT; 
    IF NOT EXISTS ( SELECT * FROM $(EHA_SCHEMA).$(BOOKINGS_TABLE) 
                    WHERE Id = @Id
                    AND KeyGuid = CAST( KEY_GUID('$(SESSION_SYMMETRIC_KEY)') AS NCHAR(36) )
                    AND ObjectName = OBJECT_NAME(@@PROCID) 
                    AND VERIFYSIGNEDBYCERT( CERT_ID('$(AUTHENTICITY_CERTIFICATE)')
                                          , CAST(CHECKSUM( Id
                                                         , @@PROCID   
                                                         , ObjectName
                                                         , @Parameters
                                                         , KeyGuid
                                                         , Status ) AS NVARCHAR(128) )
                                          , @MAC ) = 1 ) 
      RAISERROR($(MESSAGE_OFFSET)34,16,1,@@PROCID,@Id);
    IF DB_ID(@DbName) IS NULL
      RAISERROR($(MESSAGE_OFFSET)35,16,1,'Database', 'database not found');
    -- booking complete, keys open

Notice that the communication between the white-listed procedure (here BackupServiceMasterKey) and the called procedure, Book, is authenticated by verifying the signed schema objects of the calling white-listed procedure before the Book procedure is called, and then again when control is returned to the caller. Not seen here is the third, and most hardened, schema authentication that occurs in between – within the Book procedure. (more on that later…)

The complete battery – a few hundred lines of T-SQL in the Book stored procedure – includes:

  • Verify that no objects in the schema have changed since signed at create time
  • Verify that no objects are added or removed from the schema
  • Verify that the sys.messages rows added by the application at install are unchanged
  • Check for unexpected transactions to thwart padding oracle and error message mining exploits
  • Check for unexpected notifications, logging, events and audits, alerts and CLR objects that can leak clear text values
  • Verify that the currently logged on user is the ORIGINAL_LOGIN() and is authorized to use the schema. Authorization is enforced for all users including sysadmin and dbo
  • Validate all expected certificates and the “portable” symmetric key used by the utility to render error messages viewable elsewhere in the data center
  • Verify that the caller is in the white list
  • Look for specific SQL Instance configuration option states

Both the caller and called procedure contribute ingredients for a hashed message authentication code (@MAC). The @MAC will also be saved to an activity history log row that is foreign keyed to the booking table when booked – or failed to book. Unauthorized requests are captured by the booking system before they are shown the trap door: a little T-SQL honeypot if you will.

The Book stored procedure from InstallSpoke.sql

-------------------------------------------------------------------------------
--    bwunder at yahoo dot com
--    Desc: authentication gauntlet    
--    ASSERT: the caller has opened the Database Master Key  
--    by rule the 0 row colophon is a checksum of sys.messages after install
--    add configuration filtering as appropriate. Be precise.
-------------------------------------------------------------------------------
CREATE PROCEDURE $(EHA_SCHEMA).Book 
  ( @ProcId INT 
  , @Parameters VARBINARY(8000)
  , @SchemaVerified BIT
  , @Id NCHAR(36) OUTPUT
  , @MAC VARBINARY(128) OUTPUT )
$(WITH_OPTIONS)
AS
BEGIN
DECLARE @Thumbprint VARBINARY(32)
      , @Reason NVARCHAR(30)
      , @ErrorData VARBINARY(8000);
  BEGIN TRY
    SET @Reason = 'objects';
    SET @Thumbprint = ( SELECT thumbprint
                        FROM sys.certificates 
                        WHERE name = '$(OBJECT_CERTIFICATE)' )
    -- maintain in lock step with schema changes and with CheckSig function
    -- by not using function here the scalar boolean function is verified 
    -- with minimal code, low cost to query sys.objects by id
    IF (SELECT @SchemaVerified                                        
        FROM sys.certificates c
        OUTER APPLY sys.fn_check_object_signatures ('CERTIFICATE', c.thumbprint) s
        INNER JOIN sys.objects o
        ON s.entity_id = o.object_id
        WHERE c.name = '$(OBJECT_CERTIFICATE)'
        AND c.pvt_key_encryption_type = 'PW'
        AND o.schema_id = SCHEMA_ID( '$(EHA_SCHEMA)' ) 
        AND (o.parent_object_id = 0 OR o.type = 'TR')       
        HAVING SUM(ISNULL(s.is_signature_valid,0) ) = $(OBJECT_COUNT) - $(TABLE_COUNT)  
        AND SUM ( CASE WHEN o.type = 'TR' THEN 1 ELSE 0 END ) = $(TABLE_COUNT) - 1 ) <> 1
      RAISERROR($(MESSAGE_OFFSET)30,16,1);
    SET @Reason = 'messages';
    -- text of any of our sys.messages is changed from the same value computed at install 
    -- in and stored in the colophon of the 0 row or the 0 row does not exist
    IF ( SELECT CHECKSUM_AGG( BINARY_CHECKSUM(text) )  
          FROM sys.messages
          WHERE message_id between $(MESSAGE_OFFSET)00 AND $(MESSAGE_OFFSET)50 ) 
         <>
        ( SELECT Colophon
          FROM $(EHA_SCHEMA).$(BACKUP_ACTIVITY_TABLE)  
          WHERE Id = '00000000-0000-0000-0000-000000000000' ) 
      RAISERROR($(MESSAGE_OFFSET)30,16,1);
    SET @Reason = 'probe';
    IF XACT_STATE() <> 0
      RAISERROR($(MESSAGE_OFFSET)31,16,1);
    SET @Reason = 'bridge'; -- a check for unexpected data exposures 
    IF EXISTS ( SELECT * 
                FROM sys.event_notifications
                WHERE name <> '$(EVENT_NOTIFICATION)Db' )
    OR EXISTS ( SELECT * 
                FROM sys.server_event_notifications 
                WHERE name NOT IN ( '$(EVENT_NOTIFICATION)Srv', 'SQLClueDDLEventNotification' ) )
    OR EXISTS ( SELECT * 
                FROM sys.triggers 
                WHERE is_ms_shipped = 0 
                AND OBJECT_SCHEMA_NAME(object_id) = '$(EHA_SCHEMA)'
                AND ( name NOT IN ( 'trg_$(BOOKINGS_TABLE)'               -- eha trigger whitelist (also in list below) 
                                  , 'trg_$(NAMEVALUES_TABLE)'             -- here check for unexpected triggers
                                  , 'trg_$(NAMEVALUE_ACTIVITY_TABLE)'      -- below check that all are signed
                                  , 'trg_$(CONTAINERS_TABLE)'
                                  , 'trg_$(CONTAINER_ACTIVITY_TABLE)'
                                  , 'trg_$(BACKUP_ACTIVITY_TABLE)'
                                  , 'trg_$(NOTIFICATION_ACTIVITY_TABLE)'
                                  , 'trg_$(OFFSITE_ACTIVITY_TABLE)'
                                  , 'trg_$(REPORT_ACTIVITY_TABLE)'
                                  , 'trg_ddl_$(EHDB)'
                                  , 'trg_trg_$(EHDB)' ) 
                      OR IS_OBJECTSIGNED( 'OBJECT'
                                        , object_id
                                        , 'CERTIFICATE' 
                                        , @Thumbprint ) <> 1 ) ) 
    OR EXISTS (SELECT * FROM sys.server_triggers
               WHERE is_ms_shipped = 0) 
    -- all secrets should be obfuscated in any trace events but better safe than sorry
    -- the automagic obfuscation proved inadequate in masking secrets when passed as 
    -- clear text in user defined stored procedure parameters, thus secrets are passed
    -- only as encrypted parameters. The install script produced more than 192,000 trace 
    -- events with TextData that were scrutinized for secret leaks. No leaks were identified. 
    OR EXISTS (SELECT * FROM sys.traces 
               WHERE is_default <> 1)
    -- SQLAudit EVENTDATA is supposed to be obfuscated automajically but better safe than sorry
    OR EXISTS (SELECT * FROM sys.dm_xe_sessions
               WHERE name NOT IN ( 'system_health', 'sp_server_diagnostics session' )
               AND name NOT LIKE 'ehaSchemaAudit%' )
    OR EXISTS (SELECT * FROM sys.database_audit_specifications
               WHERE name <> 'ehaSchemaAuditDbSpecs' )
    OR EXISTS (SELECT * FROM sys.server_audits
               WHERE name <> 'ehaSchemaAudit' )
    OR EXISTS (SELECT * FROM msdb.dbo.sysalerts
               WHERE Name NOT LIKE 'SQLClue:%') -- filter all valid config that gets caught in gauntlet 
    OR EXISTS (SELECT * FROM sys.assemblies 
               WHERE name NOT IN ( 'Microsoft.SqlServer.Types', 'EHAdmin' ) )          
      RAISERROR($(MESSAGE_OFFSET)32, 16, 1);
    SET @Reason = 'authority';
    -- role membership is not recognized 
    -- sysadmin is always dbo so catch dbo to catch both 
    IF NOT EXISTS ( SELECT * 
                    FROM sys.database_role_members
                    WHERE role_principal_id = USER_ID('$(EHADMIN_ROLE)')
                    AND SYSTEM_USER = ORIGINAL_LOGIN()
                    AND USER_NAME(member_principal_id) = SYSTEM_USER   
                    AND SESSION_USER = 'dbo' )
    AND USER_NAME() <> 'cdc'                 
      RAISERROR($(MESSAGE_OFFSET)35, 16, 1, 'USER', @Reason);
    SET @Reason = 'DMK';
    IF EXISTS (SELECT * FROM sys.master_key_passwords)
    OR EXISTS (SELECT * 
               FROM sys.symmetric_keys sk 
               JOIN sys.key_encryptions ke 
               ON sk.symmetric_key_id = ke.key_id 
               WHERE sk.name = '##MS_DatabaseMasterKey##'
               AND ke.crypt_type <> 'ESKP' )
      RAISERROR($(MESSAGE_OFFSET)35,16,1,'MASTER KEY', @Reason);
    SET @Reason = 'cert'
    IF (SELECT pvt_key_encryption_type 
        FROM sys.certificates
        WHERE name = '$(FILE_CERTIFICATE)') <> 'MK'
    OR (SELECT pvt_key_encryption_type 
        FROM sys.certificates
        WHERE name = '$(NAME_CERTIFICATE)') <> 'MK'
    OR (SELECT pvt_key_encryption_type 
        FROM sys.certificates
        WHERE name = '$(VALUE_CERTIFICATE)') <> 'MK'
    OR (SELECT pvt_key_encryption_type 
        FROM sys.certificates
        WHERE name = '$(OBJECT_CERTIFICATE)') <> 'PW'
    OR (SELECT pvt_key_encryption_type 
        FROM sys.certificates
        WHERE name = '$(AUTHENTICITY_CERTIFICATE)') <> 'MK'
    OR (SELECT pvt_key_encryption_type 
        FROM sys.certificates
        WHERE name = '$(AUDIT_CERTIFICATE)') <> 'PW'
    OR NOT EXISTS (SELECT * 
                   FROM sys.symmetric_keys sk 
                   JOIN sys.key_encryptions ke 
                   ON sk.symmetric_key_id = ke.key_id 
                   WHERE sk.name = '$(ERROR_SYMMETRIC_KEY)'
                   AND ke.crypt_type = 'ESKP' ) -- Encryption by Password
       RAISERROR($(MESSAGE_OFFSET)35,16,1,'cert', @Reason);
    SET @Reason = 'whitelist'; 
    IF ( OBJECT_NAME(@ProcId) NOT IN ( 'AddNameValue'
                                     , 'BackupCertificate' 
                                     , 'BackupContainer' 
                                     , 'BackupDatabaseMasterKey' 
                                     , 'BackupServiceMasterKey' 
                                     , 'CertificateBackupsByThumbprint' 
                                     , 'GetPortableSymmetricKey'
                                     , 'GetPrivateValue'
                                     , 'MakeSalt' 
                                     , 'OpenAuditKey'
                                     , 'RecallContainer' 
                                     , 'ReportActivityHistory' 
                                     , 'ReportErrors' 
                                     , 'ReportServerSummary'
                                     , 'RestoreCertificate' 
                                     , 'RestoreContainer' 
                                     , 'RestoreDatabaseMasterKey' 
                                     , 'RestoreServiceMasterKey' 
                                     , 'SavePortableSymmetricKey'
                                     , 'SavePrivateValue'
                                     , 'SendOffsiteCDC' 
                                     , 'SendOffsiteTC' 
                                     , 'SelectNameValue' 
                                     , 'ValidateNameValue' 
                                     , '$(EVENT_NOTIFICATION)Activation'
                                     , 'trg_$(BOOKINGS_TABLE)'
                                     , 'trg_$(CONTAINERS_TABLE)'
                                     , 'trg_$(CONTAINER_ACTIVITY_TABLE)'
                                     , 'trg_$(NAMEVALUES_TABLE)'
                                     , 'trg_$(NAMEVALUE_ACTIVITY_TABLE)'
                                     , 'trg_$(BACKUP_ACTIVITY_TABLE)'
                                     , 'trg_$(NOTIFICATION_ACTIVITY_TABLE)'
                                     , 'trg_$(OFFSITE_ACTIVITY_TABLE)'
                                     , 'trg_$(REPORT_ACTIVITY_TABLE)' ) )
    OR OBJECT_SCHEMA_NAME(@ProcId) <> '$(EHA_SCHEMA)' 
    OR (SELECT 1                                                  -- maintain in lock step with schema changes
        FROM sys.objects t                                        -- and with CheckSig function used by callers
        WHERE OBJECT_SCHEMA_NAME( t.object_id ) = '$(EHA_SCHEMA)' -- by not using the CheckSig function here the 
        AND (parent_object_id = 0 OR type = 'TR')                 -- validity of the function is assured.
        HAVING COUNT(*) = $(OBJECT_COUNT)
        AND SUM (IS_OBJECTSIGNED( 'OBJECT', t.object_id, 'CERTIFICATE', @Thumbprint ) )  = $(TABLE_COUNT)
        AND SUM ( CASE WHEN type = 'SN' THEN 1 ELSE 0 END ) = $(TABLE_COUNT)
        AND SUM ( CASE WHEN type = 'TR' THEN 1 ELSE 0 END ) = $(TABLE_COUNT) ) <> 1           
      RAISERROR($(MESSAGE_OFFSET)35,16,1,'@ProcId', @Reason);    
    SET @Reason = 'config';
    IF SESSIONPROPERTY('ANSI_PADDING') <> 1
    OR EXISTS ( SELECT * FROM sys.databases
                WHERE database_id = DB_ID()
                AND (   is_trustworthy_on = 1
                    OR is_db_chaining_on = 1
                    OR is_ansi_padding_on = 1) )                   
          RAISERROR($(MESSAGE_OFFSET)37,16,1);
    SET @Reason = 'insert';
    -- assure this is last insert until Id is fetched
    BEGIN TRANSACTION
      INSERT INTO $(EHA_SCHEMA).$(BOOKINGS_TABLE) WITH(TABLOCK, HOLDLOCK)
        ( ProcId
        , ObjectName
        , Parameters
        , KeyGuid
        , Status )
      VALUES ( @ProcId   
             , OBJECT_NAME(@ProcId)
             , @Parameters
             , CAST( KEY_GUID('$(SESSION_SYMMETRIC_KEY)') AS NCHAR(36) )
             , 'OK');
      SET @Reason = 'sign';
      SELECT @Id = Id 
           , @MAC = MAC
      FROM ( SELECT TOP(1) Id
                         , SignByCert( CERT_ID('$(AUTHENTICITY_CERTIFICATE)')
                                     , CAST(CHECKSUM( Id
                                                    , ProcId   
                                                    , ObjectName
                                                    , Parameters
                                                    , KeyGuid
                                                    , Status ) AS NVARCHAR(128) ) ) MAC
             FROM $(EHA_SCHEMA).$(BOOKINGS_TABLE)
             WHERE KeyGuid = CAST( KEY_GUID('$(SESSION_SYMMETRIC_KEY)') AS NCHAR(36) ) 
             ORDER BY Id DESC ) AS derived;  
    COMMIT TRANSACTION;      
    SET @Reason = 'keys';
    -- First need for DMK - use DMK encrypted authenticity certificate to make the MAC 
    -- it remains open until the first of the @MAC is saved by caller or the session ends
    IF NOT EXISTS ( SELECT * FROM sys.openkeys
                    WHERE key_name = '##MS_DatabaseMasterKey##'
                    AND database_name = DB_NAME() )
      OPEN MASTER KEY DECRYPTION BY PASSWORD = '$(EHDB_DMK_ENCRYPTION_PHRASE)';
    -- open data column keys
    IF NOT EXISTS ( SELECT * FROM sys.openkeys
                    WHERE key_name = '$(FILE_SYMMETRIC_KEY)'
                    AND database_name = DB_NAME() )
      OPEN SYMMETRIC KEY [$(FILE_SYMMETRIC_KEY)]
      DECRYPTION BY CERTIFICATE [$(FILE_CERTIFICATE)];
    IF NOT EXISTS (SELECT * FROM sys.openkeys
                   WHERE key_name = '$(NAME_SYMMETRIC_KEY)')
      OPEN SYMMETRIC KEY [$(NAME_SYMMETRIC_KEY)]
      DECRYPTION BY CERTIFICATE [$(NAME_CERTIFICATE)];
    IF NOT EXISTS (SELECT * FROM sys.openkeys
                   WHERE key_name = '$(VALUE_SYMMETRIC_KEY)')
      OPEN SYMMETRIC KEY [$(VALUE_SYMMETRIC_KEY)]
      DECRYPTION BY CERTIFICATE [$(VALUE_CERTIFICATE)];
  END TRY
  BEGIN CATCH
    IF ERROR_NUMBER() <> $(MESSAGE_OFFSET)31
      ROLLBACK TRANSACTION;
    OPEN SYMMETRIC KEY $(ERROR_SYMMETRIC_KEY)
    DECRYPTION BY PASSWORD = '$(ERROR_KEY_ENCRYPTION_PHRASE)';
    -- handle insert or signing exceptions without a dup key ( an incomplete booking )
    INSERT INTO $(EHA_SCHEMA).$(BOOKINGS_TABLE) 
      ( ProcId
      , ObjectName
      , Parameters
      , KeyGuid
      , Status
      , ErrorData )
    SELECT @ProcId
         , OBJECT_NAME(@ProcId)
         , @Parameters
         , CAST( KEY_GUID('$(SESSION_SYMMETRIC_KEY)') AS NCHAR(36) )
         , ISNULL(@Reason,'Error') 
         , ENCRYPTBYKEY( KEY_GUID('$(ERROR_SYMMETRIC_KEY)')
                       , ErrorInfo 
                       , 1
                       , @Id )
    FROM (SELECT FORMATMESSAGE( $(MESSAGE_OFFSET)02
                              , ERROR_NUMBER()
                              , ERROR_SEVERITY()
                              , ERROR_STATE()
                              , ISNULL(ERROR_PROCEDURE(), 'ad hoc')
                              , ERROR_LINE()
                              , ERROR_MESSAGE() ) AS ErrorInfo ) as derived;
    CLOSE SYMMETRIC KEY $(ERROR_SYMMETRIC_KEY);
    IF @@NESTLEVEL > 1
      THROW;
  END CATCH
END
GO
ADD SIGNATURE TO $(EHA_SCHEMA).Book 
BY CERTIFICATE $(OBJECT_CERTIFICATE)
WITH PASSWORD = '$(OBJECT_CERTIFICATE_ENCRYPTION_PHRASE)';
GO

Of course all this must happen quickly with minimal impact to system resources. The system catalog and other memory resident SQL Server values are exploited as much as possible during the running of the gauntlet. If a white list procedure survives the gauntlet – and I would expect this to be the rule not the exception for the white list procedures – the phrase encrypted keys and certificates needed to complete the primary task are opened and a row is inserted in a Bookings table. Without the Bookings row the primary operation will not be attempted. In all cases the attempt should be recorded.

The Bookings row includes a cipher text column containing the parameter values (@Parameters) passed to the white listed stored procedure. The parameters are encrypted into the cipher text column using a symmetric key that can be controlled by a person or group other than the DBA or a subgroup of DBAs. In fact the DBA can be excluded from using any key or the certificate used to encrypt the key into the database if desired. Just don’t give ’em the phrase. The certificate is encrypted into the database by a user supplied phrase. Note that any automated build processing must support and honor said separation of responsibilities.

Perhaps you noticed that the first operation in the sample stored procedure assigns a value to the @Parameters local variable? @Parameters is a VARBINARY(8000) cipher text value encrypted by the AUDIT_SYMMETRIC_KEY. The assignment operation demonstrates a fundamental T-SQL method for aligning encryption key ownership with division of responsibility business rules. The value will be persisted in the Bookings row after the gauntlet is run within the Book procedure’s execution. Later, persons with knowledge of the phrase can decrypt @Parameters. Others that view the row will see only the cipher text.

Each white-listed procedure is tooled to collect its own audit trail in this way and commit a row to the Bookings table before completing a primary task. A symmetric key under the (optionally exclusive) control of an auditing authority – e.g., Accounting, Legal, a Security Officer – and dedicated to encrypting passed parameters is used to protect the audit trail from DBAs and data readers alike.

This audit style also implicitly enables on demand identification of unauthorized ad hoc data modifications by the absence of audit trail entries to defend the current state of the data. It is not the best in data access monitoring, but it is a useful failsafe: simply find the booking row from when the value was last touched and decrypt the @Parameters column; the result should match what is in the data table.
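
A sketch of that review follows, assuming the reviewer holds the audit certificate phrase; the column names follow the Bookings inserts shown above, and @BookingId stands in for whichever row is under review.

-- audit review sketch: decrypt the parameters booked with a given row
DECLARE @BookingId NCHAR(36) = '<Id of the booking row under review>';
OPEN SYMMETRIC KEY [$(AUDIT_SYMMETRIC_KEY)]
DECRYPTION BY CERTIFICATE [$(AUDIT_CERTIFICATE)]
WITH PASSWORD = '$(AUDIT_CERTIFICATE_ENCRYPTION_PHRASE)';
SELECT Id
     , ObjectName
     , CONVERT( NVARCHAR(4000)
              , DECRYPTBYKEY( Parameters, 1, KeyGuid ) ) AS ParametersClearText
FROM $(EHA_SCHEMA).$(BOOKINGS_TABLE)
WHERE Id = @BookingId;
CLOSE SYMMETRIC KEY [$(AUDIT_SYMMETRIC_KEY)];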

Notice that the GUID of a session bound temporary symmetric key is used as the @authenticator of the symmetric encryption method to protect against substitution and dictionary attacks. Since that session bound key will disappear when the SPID disconnects from the database, it is important to save the key’s GUID near the @Parameters value for use during any subsequent requirement to review or verify the values passed. Using the GUID bound to the session removes any hacking benefit from guessing the next GUID value that NEWID() or NEWSEQUENTIALID() might spew. It also allows the GUID to be reused in all of the few possible white list invocations in the session, with a common value giving us a preferred blend of uniqueness and randomness from the @authenticator, as discussed in part 4.
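
As a sketch of what such a session bound key might look like – assuming, as the install script presumably arranges elsewhere, that the session key is created as a temporary (# prefixed) key – the GUID is available only for the life of the session that created it:

-- sketch: a temporary (# prefixed) symmetric key is private to this session and
-- disappears at disconnect; its GUID makes an unguessable, session scoped authenticator
CREATE SYMMETRIC KEY #SessionKey
WITH ALGORITHM = AES_256
ENCRYPTION BY PASSWORD = '<a strong throwaway phrase>';
DECLARE @Authenticator NCHAR(36) = CAST( KEY_GUID('#SessionKey') AS NCHAR(36) );
-- @Authenticator must be persisted beside the cipher text (as the KeyGuid column is
-- in the Bookings table) or the value cannot be verified after the session ends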

The AUDIT_SYMMETRIC_KEY is encrypted into the database by a certificate. In turn the certificate is encrypted into the database using a phrase. This establishes the certificate’s phrase as the root of the encryption hierarchy. A sysadmin or dbo can drop the key or change the phrase but they cannot see the encrypted data as clear text without that phrase. Someone might steal the phrase but would still need to be able to open the database Master Key assuming they were able to hijack the encrypted data or backup files!

a translucent symmetric key from InstallSpoke.sql

-- audit certificate is independent from the DMK 
IF NOT EXISTS (SELECT * FROM sys.certificates WHERE name = '$(AUDIT_CERTIFICATE)') 
  CREATE CERTIFICATE $(AUDIT_CERTIFICATE) 
  ENCRYPTION BY PASSWORD = '$(AUDIT_CERTIFICATE_ENCRYPTION_PHRASE)' 
  WITH SUBJECT = 'Encryption Hierarchy Administrator Audit Trail'; 
-- audit key is sticky to the database 
IF NOT EXISTS (SELECT * FROM sys.symmetric_keys WHERE name = '$(AUDIT_SYMMETRIC_KEY)') 
  CREATE SYMMETRIC KEY $(AUDIT_SYMMETRIC_KEY) 
  WITH ALGORITHM = $(AUDIT_KEY_ENCRYPTION_ALGORITHM) 
  ENCRYPTION BY CERTIFICATE $(AUDIT_CERTIFICATE);

The configuration carries a formidable defense-in-depth capability. The symmetric key can only exist in the current database or a copy of the current database. The certificate can only be opened with its encryption phrase. The symmetric key can only be used when the certificate is open and both are in the current database. When the cipher text is stored using TDE or file system encryption there is some overlap of protection – and consequently an overlap of ownership boundaries – that requires at least a triangular cooperation among trusted resources to render the cipher text as clear text.

The OpenAuditKey stored procedure verifies the schema authenticity by checking that all objects are properly signed. If there is any discrepancy between the number of signed objects and the expected count, if there are unsigned or unexpected objects in the schema, or if the current user is not enrolled in the application, the OpenAuditKey procedure will return a value indicating a failure. The key will still be opened so that the audit record can be created before the white list procedure is later aborted – if @SchemaVerified is not 1 when the real work begins.

OpenAuditKey stored procedure from InstallSpoke.sql

CREATE PROCEDURE $(EHA_SCHEMA).OpenAuditKey
  ( @ProcId INT 
  , @SchemaVerified BIT OUTPUT )
$(WITH_OPTIONS)
AS
BEGIN
  IF @@NESTLEVEL > 1
    SET @SchemaVerified = $(EHA_SCHEMA).VerifySchema(@ProcId);
  OPEN SYMMETRIC KEY [$(AUDIT_SYMMETRIC_KEY)]
  DECRYPTION BY CERTIFICATE [$(AUDIT_CERTIFICATE)]
  WITH PASSWORD = '$(AUDIT_CERTIFICATE_ENCRYPTION_PHRASE)';
END
GO
ADD SIGNATURE TO $(EHA_SCHEMA).OpenAuditKey 
BY CERTIFICATE $(OBJECT_CERTIFICATE)
WITH PASSWORD = '$(OBJECT_CERTIFICATE_ENCRYPTION_PHRASE)';

As a result, it does not even matter whether the current user is a member of the sysadmin role, the dbo role, or an otherwise unprivileged principal. Without knowledge of the phrase the key cannot be used to un-cloak any cipher text.

The use of called stored procedures to open the keys and certificates removes the developer’s need to know the phrases. The procedure also provides a convenient container that can be used for key management, phrase rotations, division of responsibility and build processing.

The called procedure must be created using the WITH ENCRYPTION option in live environments. The goal of the verification is to perform the deepest possible check for evidence of unauthorized use without adding unnecessary CPU cycles or even necessarily failing the thread of execution. We already know that the cryptography will add CPU overhead. Here the signature verification for all objects in the schema queries the system catalog. This is intended to assure consistently lightweight and speedy execution.

The VerifySchema function from InstallSpoke.sql

CREATE FUNCTION $(EHA_SCHEMA).VerifySchema 
  ( @ObjectId INT )
RETURNS BIT
$(WITH_OPTIONS)
AS
BEGIN 
  RETURN (SELECT CASE WHEN COUNT(*) = $(OBJECT_COUNT) - $(TABLE_COUNT) 
                      THEN 1 ELSE 0 END
          FROM sys.certificates c
          CROSS APPLY sys.fn_check_object_signatures ( 'certificate'
                                                     , c.thumbprint) s
          WHERE c.name = '$(OBJECT_CERTIFICATE)'
          AND c.pvt_key_encryption_type = 'PW'
          AND OBJECT_SCHEMA_NAME (entity_id) = '$(EHA_SCHEMA)'
          AND EXISTS (SELECT *              
                      FROM sys.database_role_members
                      WHERE role_principal_id = USER_ID( '$(EHADMIN_ROLE)' )
                      AND SYSTEM_USER = ORIGINAL_LOGIN()
                      AND USER_NAME(member_principal_id) = ORIGINAL_LOGIN() )
          AND s.is_signed = 1 
          AND s.is_signature_valid = 1 );    
END
GO
ADD SIGNATURE TO $(EHA_SCHEMA).VerifySchema 
BY CERTIFICATE $(OBJECT_CERTIFICATE)
WITH PASSWORD = '$(OBJECT_CERTIFICATE_ENCRYPTION_PHRASE)';
GO

Notice that the VerifySchema function relies upon hard-coded values provided at install. Hard-coding is normally undesirable because it reduces maintainability. In this scenario the static values assure that no schema objects have been added, removed or modified since the procedure was created. Furthermore, the static filter expression assures that the certificate’s private key is protected by a phrase as opposed to the DMK or an external key manager. Without the certificate’s password, and without grinding out another signature digest to compare to the existing signature, the function is able to elevate confidence that the current SPID is – or is not – a valid execution instance without failing the SPID. Keeping the invalid SPID alive allows the harvest of parameters and other in-row audit details (DateTime, UserId, etc.) and full control of what – if any – information will be returned to the caller.
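
For an ad hoc look at the same catalog evidence, a query along the lines of the sketch below lists the signature state of the schema’s objects against the signing certificate (it is a review aid, not part of the gauntlet):

-- ad hoc review of object signature state for the schema's signing certificate
SELECT OBJECT_SCHEMA_NAME( s.entity_id ) AS SchemaName
     , OBJECT_NAME( s.entity_id )        AS ObjectName
     , s.is_signed
     , s.is_signature_valid
FROM sys.certificates AS c
CROSS APPLY sys.fn_check_object_signatures ( 'certificate', c.thumbprint ) AS s
WHERE c.name = '$(OBJECT_CERTIFICATE)'
AND OBJECT_SCHEMA_NAME( s.entity_id ) = '$(EHA_SCHEMA)';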

Running the gauntlet is a pattern that involves cryptography rather than a cryptographic pattern. A pattern of authentication that is not bound to the SQL Server security hierarchy. A pattern of proactive schema authentication. A pattern to assure the integrity of important application events and sensitive data in the data store.

Not all data needs gauntlet level data protection and overhead. Only the stuff that is important enough to encrypt, sign or hash.
