ad hoc T-SQL via TLS (SSL): Almost Perfect Forward Secrecy

The day the Heartbleed OpenSSL ‘vulnerability’ [do they mean backdoor?] hits the newswires seems an ideal moment to bring up an easy way to wrap your query results in an SSL tunnel between the database server and wherever you happen to be, with whatever device you happen to have available, using node.js. (Also see yesterday’s post on making hay from you-know-what with node.js. And PLEASE consider this post as encouragement to urgently upgrade to OpenSSL 1.0.1g. Post haste!)

SQLbySSL

Heartbleed is clearly the disclosure of a probably intentional, free-swinging back door in open source software, poorly disguised as a vulnerability discovered after years in the wild. I’m afraid “Oh, gee, I forgot to test that…” just doesn’t do it when you’re talking about OpenSSL. It just made every one of us who has been advocating for open source software as a pathway toward the restoration of secure computing and personal privacy look like feckless dumb shits: as big o’ fools as those politicians from the apropos other party. A classic example of how the ‘good enough’ mentality demanded of software engineers is an unacceptable course for humanity.

In my estimation, the only lesson that will be learned by proprietary software vendors and the open source communities alike from the cardiac damage that OpenSSL is about to endure as a result of this little old bleeding heart will be to never admit anything. Ever. Some things never change.

OpenSSL just might not survive without the accountability that is established through full disclosure – at least about what really happened here, but preferably across the board: a disclosure that provides compelling evidence that nothing else so sinister is still being concealed. I doubt that can happen without full and immediate disclosure from every person involved in every design decision and test automation script implemented or used during the creation, development and community review of that software. And I doubt any software organization or community would be able to really come clean about this one because – and I admit this is opinion based only on how I have seen the world go ’round over the last 60 years – maybe even a community building foundational open source software such as OpenSSL can be ‘persuaded’ to submit to governmental demands and somehow also remain bound to an organizational silence on the matter. Prepare yourselves for another doozy from the grand pooh-bah – and real bad liar – of the NSA before all is said and done on this one.

May 9, 2014 - So far General Clapper has delivered as expected. On the tails of his April Fools Day admission of what we already knew – that the NSA has conducted mass surveillance of American Citizens without warrant or suspicion for quite a while – he first denied having ever exploited the OpenSSL buffer back door in a bald-faced lie that he stuck with for maybe a week or three, and now he is merely reiterating an older, but extremely disturbing, tactical right he has claimed before for the NSA: to not reveal to American and ally owners or maintainers of open source code or hardware any exploitable bugs known by the NSA. All the owners and maintainers get to know about are the backdoors they were coerced into willingly implementing. That is just plain outrageous. A standard for tyranny is established. I guess we should be at least glad that the pooh-bah has been willing to share his despotic rule – at least in public – with first “W” and then Bronco. Hell, Bronco even got us to believe that keeping the pooh-bah on his throne was a presidential decision. We will have to wait and see if he can tolerate Monica Bengazi, I reckon.

I wonder if we will ever hear an admission of the ultimate obvious truth: that the NSA is covertly responsible for the existence of the OpenSSL back door. This must scare the hell out of Clapper’s inner circle – whoever they might be. Once they are forced to admit the first backdoor it won’t be long before the other US Government mandated back doors to our privacy begin to surface and close. I have no doubt there will be a whole lot more colluding public corporations than just Microsoft, Apple and Google. I know it’s deep and ugly, but I honestly have no idea just how deep and ugly. All I can see clearly is that there must be a good reason our Government has made such a big deal out of unrevealed backdoors planted for the Chinese Government in Huawei’s network hardware…


I made the claim in the title that this technique uses ad hoc queries. That needs some qualification. Queries in the example code below are submitted asynchronously by a node.js https server running at the database server. The query is not exactly ad hoc because you must place it in a text file for use by the node.js https server before starting the node server; after that you can execute the query from any browser with an IP path to the node server. While there is always a way to get to the text file and edit the query if need be, the idea described here is more useful for those ad hoc queries you run a few times to keep an eye on something, then might never use again. The https server would only be of importance if there were sensitive data in the query results and you wished to avoid serving it on the network as clear text.

An OpenSSL-generated key and either a self-signed or a CA-signed certificate on the database server are required before starting node. You could install the key and certificate in the local key repository, but that is not the method used here. Instead, a key and a certificate signing request are generated with OpenSSL. The key and self-signed cert are kept in the node.js server’s root folder. You may need to ignore an “unable to write ‘random state’” message from OpenSSL during key (1) and cert (3) generation. Keep in mind that when using a self-signed certificate you must also click through a browser warning informing you that the certificate is not signed by a certificate authority (CA). Also keep in mind that anyone who can get your key and certificate can decipher a cached copy of any bits you shoved through SSL tunnels built with that key and certificate. Guard that key closely.

Three ways to a self-signed certificate that will encrypt a TLS 1.2 tunnel:
1. prompt for file encryption phrases and distinguished name keys
  // genrsa and similar are superseded by genpkey (the old form was: openssl genrsa -out key.pem 1024)
  openssl genpkey -algorithm RSA -out key.pem -pkeyopt rsa_keygen_bits:1024
  openssl req -new -key key.pem -out request.csr
  openssl x509 -req -in request.csr -signkey key.pem -out cert.pem 

2. no prompts - your distinguished name (DN)
  openssl genpkey -algorithm RSA -out key.pem -pkeyopt rsa_keygen_bits:1024 -pass pass:keyFileSecret
  openssl req -new -key key.pem -passin pass:keyFileSecret -out request.csr -passout pass:certFileSecret -subj "/DC=org/DC=YABVE/DC=users/UID=123456+CN=bwunder" -multivalue-rdn
  openssl x509 -req -in request.csr -signkey key.pem -out cert.pem 

3. one command - no request file - no prompts
  openssl req -x509 -newkey rsa:1024 -keyout key.pem -out cert.pem -passin pass:keyFileSecret -passout pass:certFileSecret -days 1 -batch 

The key used to generate the request is then used to sign the request certificate. Certificate and key are saved as .pem files in the node.js server folder. You could even roll-ur-own perfect forward secrecy. That is to say, automate the generation and signing of a new key before every request. Not quite perfect, but this could allow you to keep going in ‘manual mode’ with or without an urgent upgrade to close a risk that is not considered a risk when using perfect forward secrecy – at least until perfect forward secrecy is rendered ineffective in a few years.

Adding the one-command key generation as the “prestart” script in the node app’s package.json will get you a new key each time you start the node.js server.
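A minimal package.json along those lines might look like the sketch below. The package name and the server.js file name are my assumptions; the prestart command is just the one-command generation from option 3 above. With this in place, npm start regenerates key.pem and cert.pem and then launches the server.

{
  "name": "sqlhawgs",
  "version": "0.0.1",
  "dependencies": {
    "edge": "*",
    "edge-sql": "*"
  },
  "scripts": {
    "prestart": "openssl req -x509 -newkey rsa:1024 -keyout key.pem -out cert.pem -passin pass:keyFileSecret -passout pass:certFileSecret -days 1 -batch",
    "start": "node server.js"
  }
}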

You may need to allow inbound TCP traffic on the port serving SSL pages (8124 in the example) in your firewall if you want to hit the query from your smartphone’s browser or from any remote workstation that can reach the database server on the port assigned to the https server. You will also need to present your Windows domain credentials for authentication unless you hardcode a SQL login username/password in the connection string (not recommended).
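On Windows, opening that port can be as simple as one rule added from an elevated command prompt. The rule name here is arbitrary and this is only a sketch – check it against your own firewall policy:

netsh advfirewall firewall add rule name="node https 8124" dir=in action=allow protocol=TCP localport=8124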

As for that connection string: edge expects to find a connection string to the SQL Server in an environment variable, set where node.exe is called, before the node process is started.

SET EDGE_SQL_CONNECTION_STRING=Data Source=localhost;Initial Catalog=tempdb;Integrated Security=True

Lastly, when the node server is started you will be prompted at the console to enter a PEM password. It is not clear from the prompt, but this is the phrase you used to encrypt the certificate file; I used ‘certFileSecret’ in the example above.

Happy Heartbleed day!


/*
  npm install edge
  npm install edge-sql
*/
var edge = require('edge');
var sys = require('sys');
var https = require('https');
var fs = require('fs');

var port = 8124;
var options = {
  key: fs.readFileSync('./key.pem'),
  cert: fs.readFileSync('./cert.pem')
};

var sqlQuery = edge.func('sql', function () {/*
  SELECT top(10) qs.total_worker_time AS [total worker time]
     , qs.total_worker_time/qs.execution_count AS [average worker time]
     , qs.execution_count AS [execution count]
     , REPLACE(
         SUBSTRING( st.text
                  , ( qs.statement_start_offset / 2 ) + 1
                  , ( ( CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH( st.text )
                        ELSE qs.statement_end_offset
                        END - qs.statement_start_offset ) / 2 ) + 1 )
         , CHAR(9)
         , SPACE(2) ) AS [query text]
  FROM sys.dm_exec_query_stats AS qs
  CROSS APPLY sys.dm_exec_sql_text( qs.sql_handle ) AS st
  ORDER BY total_worker_time DESC;
*/});

// listens for query results to dump to a table
https.createServer( options, function( request, response ) {
  sys.log('request: ' + request.url );
  if ( request.url==='/' ) {
    sqlQuery( null, function( error, result ) {
      if ( error ) throw error;
      if ( result ) {
        response.writeHead( 200, {'Content-Type': 'text/html'} );
        response.write('<!DOCTYPE html>');
        response.write('<html>');
        response.write('<head>');
        response.write('<title>SQLHawgs</title>');
        response.write('</head>');
        response.write('<body>');
        response.write('<table border="1">');
        if ( sys.isArray(result) )  {
          response.write('<tr>');
          Object.keys( result[0] ).forEach( function( key ) {
            response.write('<th>' + key + '</th>');
          });
          response.write('</tr>');
          result.forEach( function( row ) {
            response.write('<tr>')
            Object.keys( row ).forEach( function( key ) {
              if ( typeof row[key] === 'string' && row[key].length >= 40 ) {
                // long strings (like query text) go in a disabled textarea so the table stays readable
                response.write('<td><textarea disabled>' + row[key] + '</textarea></td>');
              }
              else {
                response.write('<td>' + row[key] + '</td>');
              }
            });
            response.write('</tr>');
          });
        }
        else  {
          Object.keys( result[0] ).forEach( function( key ) {
            response.write( '<tr><td>' + key + '</td><td>' + result[0][key] + '</td></tr>');
          });
        }
        response.write( '</table>' );
        response.write( '</body>' );
        response.write( '</html>' );
        response.end();
      }
      sys.log("rows returned " + result.length)
    });
  }
}).listen(port);

sys.log('listening for https requests on port ' + port);

Posted in Privacy, Secure Data

Making JSON Hay out of SQL Server Data

Moving data in and out of a relational database is a relentless runtime bottleneck. I suspect you would even agree that effecting metadata change at the database is more disruptive than the run of the mill CRUD. I often hear the same [straw] arguments for a new cloud vendor or new hardware or new skill set or a code rewrite to relieve throughput bottlenecks. But what if what you really need is a new data store model? What if you have been building, and rebuilding, fancy databases as cheaply as possible on scenic oceanfront property beneath a high muddy bluff across the delta of a rushing river from a massive smoking volcano in the rain? That is to say, maybe an RDBMS is simply not quite the right place to put your data all of the time? Maybe… just maybe… SQL Server – and Oracle and PostgreSQL – are passé and the justifications for normalization of data are now but archaic specks disappearing into the vortex of the black hole that is Moore’s Law?

On the off chance that there is some truth to that notion I figure it behooves us to at least be aware of the alternatives as they gain some popularity. I personally enjoy trying new stuff. I prefer to take enough time examining a thing so that what I am doing with it makes sense to me. In late 2012 the open source MongoDB project caught my attention. I was almost immediately surprised by what I found. Intelligent sharding right out of the box for starters. And MongoDB could replicate and/or shard between a database instance running on Windows and a database instance running on Linux for instance, or Android or OSX [or Arduino or LONworks?]. And there was shard aware and array element aware b-tree indexing, and db.Collection.stats() - akin to SQL Server’s SHOWPLAN. Even shard aware Map-Reduce aggregations, so the shards can be easily and properly distributed across an HDFS – or an intercontinental cluster for that matter. And tools to tune queries! I was hooked in short order on the usability and the possibilities, so I dug in to better understand the best thing since sliced bread.

The “mongo shell” – used for configuration, administration and ad hoc queries – lives on an exclusive diet of javascript. Equally easy to use API drivers are available from MongoDB.org for Python, Ruby, PHP, Scala, C, C++, C#, Java, Perl, Erlang and Haskell. There is more to the API than you find with the Windows Azure or AWS storage object, or Cassandra or SQLite for that matter, but still not as much complexity for the developer, or waiting for results for the user, as is invariably encountered with relational models.

In the course(s) of learning about the API and struggling to remember all the things I never knew about javascript and the precious few things I never knew I knew about javascript, I found myself working with – and schooling myself on – node.js (node). Node is a non-blocking single threaded workhorse suitable for administrative work and operational monitoring of servers, smartphones, switches, clouds and ‘the Internet of things‘. The mongo shell is still the right tool for configuration, indexing, testing and most janitorial grunt work at the database. Unlike node, the shell is not async by default and not all of the low-level power of the mongo shell is exposed through the APIs. Nonetheless, node uses the native javascript MongoDB API. And I must say that having the application and the db console in the same language and using the exact same data structures is huge for productivity. Minimal impedance for the DBA to force the mental shift between mongo server and node callbacks. Virtually no impedance for the developer to mentally shift between app layers and data layers!
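To illustrate, the same document lookup reads almost identically in the shell and in a node script. This is only a sketch – the products collection and the test database are the ones built by the migration script later in this post, and the product number is just a value from the AdventureWorksLT sample data:

// mongo shell (mongo localhost:27017/test)
db.products.findOne( { ProductNumber: 'FR-R92B-58' } );

// node.js with the native driver (npm install mongodb)
var mongoClient = require('mongodb').MongoClient;
mongoClient.connect( 'mongodb://localhost:27017/test', function( error, db ) {
  if ( error ) throw error;
  db.collection('products').findOne( { ProductNumber: 'FR-R92B-58' },
    function( error, doc ) {
      if ( error ) throw error;
      console.log( doc );
      db.close();
    });
});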

Perhaps node.js is a seriously excellent cross platform, cross device administrative tool as I believe, but I can only guarantee that it is fun. It is an open source environment with potential beyond any I can imagine for Powershell or ssh. Packages that expose functionality through javascript libraries and are written in C, C++, C#, Java and/or Python exist for connection to node. node makes no bones that MongoDB is the preferred data store. I am by no means a data store connoisseur though I have snacked at the NoSQL corner store and stood in the Linux lunch line often enough to feel entitled to an opinion. You’ll have to take it from there.

FWIW: 10gen.com, MongoDB’s sponsor corp, has a real good 6 lesson course on-line for developers that will give you journeyman skills with MongoDB and get you started with node.js. Mostly hands on, so there is a lot of homework. And it’s free.

To otherwise help ease your introduction – if you decide to kick the MongoDB tires using node.js and you are a SQL DBA – I am providing an example data migration from SQL Server into MongoDB that you can easily set up locally or modify to suit your data. Most of the work will be completing the download and installation of the few open source software libraries required.

For data here I use the simplified products table hierarchy from the AdventureWorksLT2012 sample database available from codeplex. product is easily recognizable as what I will call the unit of aggregation.

The unit of aggregation is all data that describes an atomic application object or entity. From the already abstracted relational perspective one could think of the unit of aggregation as everything about an entity de-normalized into one row in one table. In practice, many relational data models have already suffered this fate to one extent or another.      

In the AdventureWorksLT database I see three candidates for unit of aggregation: customers (4 tables), products (6 tables) and Sales (2 tables, parent-child – the child is probably the unit of aggregation). Product is interesting because there are nested arrays (1 to many relationships) and a grouping hierarchy (category). Here is a diagram of the SQL Server data:

ProductsDbDiagram

This is loaded into a collection of JSON documents of products with the following format. The value (right) side of each ‘name-value’ pair in the document indicates the table and column to be used from the SQL data.

[ 
  {
    _id : Product.ProductID,
    Name: Product.Name,
    ProductNumber: Product.ProductNumber,
    Color: Product.Color,
    StandardCost: Product.StandardCost,
    ListPrice: Product.ListPrice,
    Size: Product.Size,
    Weight: Product.Weight,
    SellStartDate: Product.SellStartDate,
    SellEndDate: Product.SellEndDate,
    DiscontinuedDate: Product.DiscontinuedDate,
    ThumbNailPhoto: Product.ThumbNailPhoto,
    ThumbNailPhotoFileName: Product.ThumbNailPhotoFileName,
    rowguid: Product.rowguid,	
    ModifiedDate: Product.ModifiedDate,
    category: 
      {
        ProductCategoryID: ProductCategory.ProductCategoryID,
        ParentProductCategoryID : ProductCategory.ParentProductCategoryID,
        Name: ProductCategory.Name,
        ModifiedDate: ProductCategory.ModifiedDate 	
      },
    model:
      {
        ProductModelID: ProductModel.ProductModelID,
        Name: ProductModel.Name,
        CatalogDescription: ProductModel.CatalogDescription,
        ModifiedDate: ProductModel.ModifiedDate,
        descrs:
          [
            {
              ProductDescriptionID: ProductModelProductDescription.ProductDescriptionID,
              Culture: ProductModelProductDescription.Culture,
              Description: ProductDescription.Description,
              ModifiedDate: ProductDescription.ModifiedDate 	
            }
            ,{n}... 
          ]
      }
   }
   ,{n}... 
 ] 

The code is executed from a command prompt with the /nodejs/ directory in the environment path. I am using node (0.10.25) on Windows Server 2012 with SQL Server 2012 SP1 Developer Edition at the default location and MongoDB 2.2.1 already installed prior to installing node. SQL Server is running as a service and mongod is running from a command prompt. I am using only Windows Authentication. For SQL Server access I am using the edge and edge-sql npm packages. edge asynchronously marshals T-SQL through the local .NET framework libraries and returns JSON, but only works on Windows.

edge-sql result sets come back to the javascript application as name-value pairs marshaled from a .NET ExpandoObject that looks and smells like JSON to me. The work left after the queries return results is merely to assemble the atomic data document from the pieces of relational contention and shove it into a MongoDB collection. This all works great for now, but I am not totally convinced that edge will make the final cut. I will also warn you that if you decide to start adapting the script to another table hierarchy you will be forced to also come to understand Closures and Scope in Javascript callbacks. I hope you do. It’s good stuff. Not very SQLish though.
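The short version, in case closures are new to you: a var declared in a javascript loop is shared by every callback created in that loop, and by the time an asynchronous callback fires the loop has already finished. That is why the script below wraps the body of its for loop in an immediately invoked function – the current row is pinned to the parameter j. A stripped-down illustration:

// without a closure every callback sees the final value of i (3)
for ( var i = 0; i < 3; i++ ) {
  setTimeout( function () { console.log( 'no closure: ' + i ); }, 10 );
}

// the immediately invoked function captures each value of i as j
for ( var i = 0; i < 3; i++ ) {
  ( function ( j ) {
    setTimeout( function () { console.log( 'closure: ' + j ); }, 10 );
  })( i );
}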

/*
  npm install mongodb
  npm install edge
  npm install edge-sql

  edge expects valid SQLClient connection string in environment variable
  before node is started.
  EDGE_SQL_CONNECTION_STRING=Data Source=localhost;Initial Catalog=AdventureWorksLT;Integrated Security=True

  edge-sql is a home run when aiming SQL data at a JSON target because
  you just supply valid T-SQL and edge will return the ADO recordset
  as a JSON collection of row objects to the scope of the query callback.   	

  edge-sql for a production system is a strike out (w/backwards K)
  1. returns only the first result requested no matter how
    many results produced
  2. the javascript file containing edge.func text is vulnerable to
    SQL injection hijack by adding a semicolon followed by any valid T-SQL
    command or statement provided first word in edge.func callback comment
    is insert, update, delete or select (not case sensitive)
  STEEEEERIKE 3. the connection is made with the security context of the
    Windows user running the script so database permissions and data can
    be hijacked	through an attack on the file with an edge.func('sql')
*/

var edge = require('edge');
var mongoClient = require('mongodb').MongoClient;

var mongoURI = 'mongodb://localhost:27017/test';

// named function expressions (compile time)
var sqlProductCursor = edge.func( 'sql', function () {/*
  SELECT  ProductID
        , ProductCategoryID
        , ProductModelID
   FROM SalesLT.Product;
*/});
var sqlProduct = edge.func( 'sql', function () {/*
  SELECT  ProductID AS _id
        , Name
        , ProductNumber
        , Color
        , StandardCost
        , ListPrice
        , Size
        , Weight
        , SellStartDate
        , SellEndDate
        , DiscontinuedDate
        , ThumbnailPhoto -- varbinary MAX!
        , ThumbnailPhotoFileName
        , rowguid
        , ModifiedDate
  FROM SalesLT.Product
  WHERE ProductID = @ProductID;
*/});
var sqlProductCategory =  edge.func( 'sql', function () {/*
  SELECT ProductCategoryID
       , ParentProductCategoryID
       , Name
  FROM SalesLT.ProductCategory
    WHERE ProductCategoryID = @ProductCategoryID;
*/} );
var sqlProductModel = edge.func( 'sql', function () {/*
  SELECT ProductModelID
        , Name
        , CatalogDescription
        , ModifiedDate
  FROM SalesLT.ProductModel
  WHERE ProductModelID = @ProductModelID;
*/});
var sqlProductModelProductDescription =
  edge.func( 'sql', function () {/*
    SELECT 	pmpd.ProductDescriptionID
          , pmpd.Culture
          , pd.Description
          , pd.ModifiedDate
    FROM SalesLT.ProductModelProductDescription AS pmpd
    LEFT JOIN SalesLT.ProductDescription AS pd
    ON pmpd.ProductDescriptionID = pd.ProductDescriptionID
    WHERE ProductModelID = @ProductModelID;
*/});		

mongoClient.connect( mongoURI, function( error, db ) {
  db.collection('products').drop();
});	 

mongoClient.connect( mongoURI, function( error, db ) {
  sqlProductCursor( null, function( error, sqlp ) {
    if ( error ) throw error;
    for (var i=0; i < sqlp.length; i++) {
      ( function (j) {
          sqlProduct (
            { "ProductID" : j.ProductID },
            function ( error, product ) {
              sqlProductCategory (
                { "ProductCategoryID" : j.ProductCategoryID },
                function ( error, category ) {
                  sqlProductModel (
                    { "ProductModelID" : j.ProductModelID },
                    function ( error, model ) {
                      sqlProductModelProductDescription (
                        {	"ProductModelID" : j.ProductModelID },
                        function ( error, descrs ) {
                          model[0].descrs = descrs;
                          product[0].category = category[0];
                          product[0].model = model[0];
                          db.collection('products').insert( product ,
                            function( error, inserted ) {
                              if (error) throw error;
                       		  });
                        });	// descrs
                    }); // model
                  }); // category
            }); // product
        })(sqlp[i]); // product closure
      }
    });
});	 

That’s all there is to it.
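If you want to verify the load, a quick check from the mongo shell is worth the few keystrokes. A sketch, assuming the default test database named in the connection string above:

db.products.count()                 // should match SELECT COUNT(*) FROM SalesLT.Product
db.products.findOne( {}, { Name: 1, "category.Name": 1, "model.Name": 1 } )
// the array element aware indexing mentioned earlier, here on a field nested in the descrs array
db.products.ensureIndex( { "model.descrs.Culture": 1 } )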

Posted in NoSQL

Background Checks for EVERYBODY!

A background check simply filters and formats information about a person’s life into a somewhat standard and therefore useful form. Much of the information in a background check is already out there in the public domain. Most of the rest is already controlled and/or owned by the government. What is missing are the filters and formats that render the data useful to humanity.

Anyone who has never done so might be shocked to see what anyone can learn about them at web sites like Pipl.com for free. And anyone willing to pay a little can look deeply into your background without your permission. To be sure, anybody in the world can pay a small amount in bitcoins to see a shocking amount of information about you. That is to say, check your background without leaving a trace.

US Government agencies like the IRS, NSA, FBI, CIA and ATF; industrial surveillance engines like Google, Bing and Yahoo; as well as banks, insurance companies, retailers and wholesalers and their allied data processors collect giga-scads of data useful to check a person’s background. Ever so slowly they are sharing select bits of this information but so far everyone still sucks at cooperating to render this information useful to humanity thanks to self interests (e.g., profit, criminality, and fear of reprisal, redress or revenge for the shady ways our background information is now used).

I have previously blogged about the US Government Agency’s mandate for unfettered, unquestioned and un-American access to ‘pen and tap’ data in every data center and central office in America. What they cannot take for the asking or with a little coercion, they steal. Government regulations only make the situation worse at this time. HIPAA, for example, sorta standardizes health information and compels health care providers to store your medical records in a supposedly secure electronic form. Combined with the now widely suspected to be compromised cryptography, I have little doubt left that government agencies are licking their chops over this data. So why isn’t there a plan to be of service to the people this data is about? That’s all I want to know.

Government data proper can often be classified “public information”. Yet we the public have to know who to ask or pay – and too often it seems the secret word and/or the appropriate political alignment – to see it.

Stores routinely collect video surveillance and purchase transaction data.

Many 24-7 news operations exist at the local, national and international levels.

The data is already out there. What is missing is a standard way of using the already collected and in many cases even already aggregated data so that the right decisions can be made at the right time.

The Business Intelligence (BI) tools now in use and the data already being collected are enough to develop humane algorithms that can determine such things as who should or should not buy a gun – and assist with implementation planning that doesn’t end in a shoot-out.

A background check must add a measure of transparency and accountability to the information already out there about a person. With transparency in background checks you could know as well as anyone the information out there about you and where it came from. You could even work to address any errors – and weaknesses – that show up in your background. People would get a background check every time they create an entry or update in the background data set. That provides an accountable path for people wishing to seek corrections when information is incorrect, inaccurate or inappropriate. Make no mistake, your data is out there now. Right now who uses it and how they use it is completely out of your view. It can easily be used to deceive you and easily manipulated to deceive others about who you are. And in today’s world there is not a thing you can do about it.

On the other hand, imagine how anyone might react when they ‘fail’ a background check when trying to buy a gun. Background checks would surely make a bad situation worse if doled out as a pass or fail ticket awarded to one lawful citizen but not another lawful citizen by the government, especially if the ‘fail’ meant that person could not legally buy a gun – perhaps due to a mental health issue the person did not know about or had not been able to accept they had. Couple that with the fact that most gun buyers in the US today are not buying their first gun: they are already armed.

Imagine, on the other hand, how it might affect your vote if you knew which politicians had a history of mental illness or regularly visited porn sites or which – if any – of the judges you are supposed to approve at each election were blatantly corrupt or prescription drug abusers or something worse like child abusers or organized crime bosses. Google can tell you such judges are indeed among us.

Imagine if those responsible for hiring our teachers and police had similar information when making hiring, promotion and teacher retention decisions. And imagine we had the same information about teachers and police as the people that hire teachers and police? And teachers and police had the same information about us. And we had access to that information about our Doctor or car mechanic or date? Shouldn’t everyone be exposed to the same level of scrutiny? NRA spokesperson Wayne LaPierre? President Barack Obama? And you? And me! And my sister who I can assure you is one of the finest people on the planet? It’s way too late to set the sights on who should get such scrutiny. It’s happening now to everyone but not equally and completely without transparency or proper oversight. It’s time to move out from behind the opaque and sound-proof walls of denial. This ain’t no damn game.

Still and all, I do wonder what you would do with those of us who don’t meet your background check sniff test. After living in this society where people prefer not to know their neighbors all my life, that truth could be so disturbing that even greater chaos ensues. So many are now armed with assault weapons, and so few police have the skills required to recognize let alone counsel, that there probably is not a lot anyone can do about guns for generations without a massive weaponized domestic drone campaign to ‘take away the guns’ from the cold dead droned hands of those labeled “should not have guns”. Intervention will remain a point of friction that requires the minds of our best trained and most skillful scientists, clergy and communicators.

It is beyond belief that the political system has even been discussing a plan that could so obviously end with a “take his guns away” order. That’s just not going to happen, but it is clearly a place background checks for gun buyers will take us.

That seems to be where the conversation is stuck. Some folks seem to believe background checks are a waste of time and won’t help anything. Another side is saying background checks are not enough and even more rules and regulations are necessary. Meanwhile the politicians provide us the predictable dis-services of misinformation, stonewalling and feckless bullshit in the hope that nothing changes other than the number of zeros in their bank accounts.

To get the conversation moving we have to stop singling out any narrow slice of the population (e.g., gun buyers, disturbed individuals, terrorists, tree huggers or religious fundamentalists) as the source of the problem and so aim prevention at the decimation of stereotypical yet otherwise law-abiding persons as a magical elixir. Criminals, haters and lunatics will find ways to do their deeds whether or not they can legally buy a gun. We need a system that ameliorates aberrant behaviors before there are headlines of death and disaster. Continuous cradle-to-grave background checks and a co-operative of trans-personal counseling to reach out to one another regularly to help us understand what our background check is signaling is essential. Unfortunately, those in positions of power would place too much of their power at risk under such a structure simply because EVERYBODY has a few red flags in their background.

The American public is apparently 90% behind the need for background checks for potential gun owners. Seems to me like everyone in the country is a potential gun owner and can easily circumvent the ATF just like we have been doing all along. Doesn’t that technically make everyone a candidate for a pre-purchase background check? And since anyone can buy a gun at any time isn’t it important to maintain that background check? We can have a system that

  • allows anyone to buy a gun anytime they want – if that’s what we want
  • doesn’t wait until someone is buying a gun to help them with the issues identified through background check
  • does the needful to protect the community from all risks uncovered through background check

All political systems of our time may already be too corrupt to even dare give lip service to a solution based upon transparency, accountability and human dignity. That is but one disadvantage of the bought-and-paid-for political system we must suffer and the politically biased media that would quickly lose ratings and credibility if accountability and transparency of background checks were the rule for everybody. A reasonable compromise among elected officials is not possible at this time when everything but wikileaks happens behind closed doors. Background checks must be transparent to be beneficial to humanity.

Background checks are already as much a part of life as death and taxes. The problem is current background checks are sloppy, incomplete, inconsistent and subject to corruption. I bet city cops do get a much different background check than seasonal city workers for example. But not at all am I convinced the cop gets the better check of the two.

What we really need to do is agree on what needs to be in a background check and then go about the business of compiling and checking backgrounds consistently throughout the population. Whatever it is, I have no doubt that the US Government already has more than enough access to personal information to do this work. The Generals in charge of this access do not even need the OK of legislators or citizens to do this work. But instead of doing this work we are witnessing a very different militarization of law enforcement in this country. A militarization apparently meant to entertain: to show the people they have nothing to fear. A militarization that has not been instrumental in making us any safer but showcases the impotent law enforcement of shock and awe, supported by videographers seeking to compose the most dangerous or scary video among the many videographers covering the situation.

Consider the Boston Bombing. Authorities failed to identify the threat or the bombers as a risk before the event. There was no preparation or screening for such bombs. They relied upon video from local businesses previously compelled to install surveillance cameras for self protection against [lesser?] crimes to identify the bombers. It took all the next day for authorities to filter the data and come up with a couple of freeze frames of video ‘safe’ enough to release to the public. Then authorities had to rely upon the transparency of crowd sourcing to turn the images into a name – but just enough pictures to get that name. All the while, authorities maintained a clear strangle-hold on the media covering the event. This demonstrates once again – and without any doubt – that the government has the technology and the data access to quickly and extensively check any person’s background once they have the person’s name. Unfortunately the late and well orchestrated photo release backfired when the bombers knew their cover was blown and decided to make a run for it. duh. That could have been the end of it, but one of the two escaped after a 200+ round plus two bomb battle with police instigated by the criminals en route. The authorities continued with a heavily armed show of force as they spent a long hard day searching door to door in riot and combat gear. The camouflage uniformed, automatic weapon toting combat cops and armored and camouflaged vehicles in Boston were all over the TV, all day. I was certainly hypnotized and horrified by the media coverage. Finally the authorities called off the search empty handed at the end of the day and vacated a ‘shelter in place’ order for a million people in Boston – even though it was not known to be any more or less safe. But involving the people with that little flash of transparency was all it took. A citizen found the bomber in his back yard outside of the search area. Hours later the bomber was finally arrested – only after they had unloaded a couple of many-round clips from police assault weapons on the hiding place of what turned out to be an unarmed and badly wounded criminal. All the next day the commanders of the various authorities took turns telling the people what a great job the authorities had done. Now that the event seems past I can only wonder how much faster this tragedy could have been brought to a conclusion if the police were transparent and accountable instead of operating as splintered secret military and para-military operations. Or whether they will now do anything that might improve their chances of preventing bags full of bombs from being spread around the finish line of the Boston Marathon.

Imagine if before-the-crime background checks helped us become aware of people when they drop out of college or lose a job or get divorced. Imagine if we used this background information to reach out to people and help in those ways others can help in such matters. This happens now, sometimes. But mostly in small towns. Even in small towns, too many people are left to their devices to find a way through the most difficult events in modern life. That’s not good enough.

I am certainly not trying to defend the Boston Bombers. I am saying that background checks for everybody just might work if we did it right. And I’m saying that already everyone does get background checks that they are not aware of. I’m talking about a background check with complete transparency and complete accountability. The government would get to stop pretending they do not invade the privacy of citizens inappropriately. Technology leaders like Microsoft, Amazon and Cisco that are desperately looking for recurring revenue streams just like this, and have the infrastructure to support the communications and processing power needed, would find a revenue stream – and would not have to make much of a change. I’m not going to give Facebook any credit as a technology leader, but Facebook is a very popular place used by professionals and criminals around the world when checking backgrounds. You’d think the capitalists would have jumped on the market potential big time! The recurring revenue potential for universal background checks has to be one of the greatest technology concessionaire’s opportunities of all time!

Neither am I advocating for a Big Brother state. I’m talking about using the data already being collected and aggregated – and so therefore already mandated in accordance with CALEA to be available for the pleasure and [mis]use of law enforcement. I want only to extend the access and knowledge of these agencies and corporations about my background to me and to others with a bona fide reason to peer into my background. And of course, I want to know who looks at my background. I believe that the right way to ‘enforce’ background checks is through trans-personal counseling.

I am suggesting that maybe all we have to do is turn giving a shit about each other into a guaranteed revenue stream and capitalism will protect us from homegrown terrorism. But I admit, there is still something about the idea that does not fill me with hope.

The time to make contact with a person that has done something – or enough somethings – that might disallow them to buy a gun or bullets or even a weapon repair part is at the moment they should no longer be allowed to do such a thing. Waiting until someone wants to buy a gun to tell them they cannot does not address the problem. Instead it creates a new problem.

I for one would much rather see the cops equipped with the truth as borne by my background than see the arrogant and highly militarized uniformed haters in the news clubbing and shooting our unarmed children who gather in protest of blatant injustices of the time. I also note that the cops have an impressive kill rate for angry old men who barricade themselves in their own home, so I work hard to remain civil and keep my mouth shut. Perhaps I am just frustrated to see these secret militarized agencies only ever able to arrive on the scene after the evil ones among us have acted with such terrible consequence to innocents.

Posted in Privacy

Tails from a Diskless Hyper-V

The Amnesic Incognito Live System (Tails) is open source privacy software. Tails “helps you to use the Internet anonymously almost anywhere you go and on any computer but leave no trace…”. This post explores how a Windows private cloud or data center might leverage Tails to harden defense in depth.

Tails is delivered as a Debian Linux .iso rootkit – er, I mean boot image – configured to enable peer-to-peer encryption of e-mail messages and IM messages plus The Onion Router’s (Tor) anonymous SSL web browsing. Tor’s add-ins set [java]scripting and cookies off for all web sites by default, although the user can elect to allow scripts or cookies on a per site basis. The recommended way to use Tails is to burn a verified download of the .iso onto a write-once DVD and then use that DVD as the boot device to start the computer. Tails is mostly designed and configured to leave no trace in that scenario and to assure that once the verified image is laid down on a DVD it cannot be changed.

One limitation of this preferred scenario is that you need to reboot the machine a couple of times each time you use Tails: once to boot and once to remove the footprint left behind in memory. Another limitation is that a DVD drive may not be readily available. Tails developers suggest an almost as secure USB alternative to the DVD, but caution that an ability to surreptitiously modify the kernel is introduced. Tails also allows the user to manually configure local storage, opening a potential security hole. Local storage is needed, for example, to load cryptographic keys for the secure OTR IM and PGP email messaging apps included for peer to peer privacy. Tails does automagically configure a piece of its memory as a RAMdisk, allowing keys to be introduced without persistence in theory.

Virtualization too can remove the reboot overhead, however the Tails documentation cautions against running Tails as a virtual machine (VM). “The main issue,” they say, “is if the host operating system is compromised with a software keylogger or other malware.” There simply is no facility for the VM to be sure no such spyware exists on the host. The usage I am suggesting below is the inverse of that trust model. Here we will use Tails to isolate the trusted host’s Windows domain from the Internet, leveraging virtualization to help preserve the integrity of the trusted node. From a practical standpoint, a better rule of thumb – though still in line with the cautious Tails statement on virtualization – may be to trust a virtual environment only to the extent you trust the underlying host environment(s) that support the virtual machine.

A Windows Domain in a physically secured data center implies that the Domain and the data center Ops and admin staff are trusted. But when you open ports, especially 80/443, into that Domain that trust is at increased risk. Given Hyper-V administrator rights on a Windows 2012 Server – but not while logged in with administrative rights on the server – using Tails from a virtual machine might just be a safer, more secure and self maintaining usability enhancement for a Windows-centric data center or private cloud. 

  • Tails can eliminate many requirements that expose the Windows Domain to the Internet. Internet risks are sandboxed on the Linux VM. The Linux instance has no rights or access in the Domain. The Domain has no rights or access to the Linux instance other than via Hyper-V Manager. Most interestingly, Tails boots to a Virtual Machine that has no disk space allocated (other than the RAM disk already mentioned).
  • Tails will thwart most external traffic analysis efforts by competitors and adversaries. DPI sniffers and pen register access in the outside world will only expose the fact that you have traversed the Internet via SSL traffic to the Tor Network. SSL will prevent most snooping between the VM and the onion servers. No more than a handful of governments – and a few other cartels with adequate processing power - will even have the ability to backdoor through the Certificate Authority or brute force the SSL to actually see where you are going on the Internet.     
  • The Tails developers take care of the security updates and other maintenance. To upgrade or patch when used in the read-only diskless Hyper-V configuration, all you need do is download the latest image file.

Some organizations may be resistant to this idea because Tails will also allow employees to privately and anonymously communicate with the outside world while at work. True enough, the Tor pen register data will simply not provide adequate forensic surveillance detail to know what data center employees are up to. That alone could put the kibosh on Tails from a Diskless Hyper-V. Organizational fear of employees notwithstanding, Tails in a Windows data center presents a robust security profile with excellent usability for those times when the knowledge available on the Internet is urgently needed to help solve a problem or help understand a configuration. I would discourage efforts to configure a backdoor to monitor actual Tails usage from the host simply because once the back door is opened anybody can walk through: better to put your monitoring energy into making sure there is no back door.

Tails is easy to deploy as a Hyper-V VM on Windows Server 2012 (or Windows 8! with the Hyper-V client) – a rough PowerShell sketch of the steps follows this list:

  • download and verify the file from https://tails.boum.org. No need to burn a DVD. Hyper-V will use the .iso file, although a DVD would work too if that is preferred and will undeniably help to assure the integrity of the image. A shared copy of the .iso can be used across an environment. It is necessary to ensure that the VM host computer’s management account and the user account attempting to start the VM have full access to the file share and/or file system folder of the image.
  • Add a new Virtual Machine in Hyper-V Manager.
  1. Give the VM 512MB of memory (dynamic works as well as static)
  2. Set the BIOS boot order to start with “CD”
  3. Set the .iso file – or physical DVD drive if that option is used – as the VM DVD Drive.
  4. Configure the VM with a virtual network adapter that can get to the Internet.

May 5, 2014 – I noticed I had to enable MAC spoofing in Hyper-V for the Internet Network Adapter when I used the newly released tails version 1. The checkbox is located on the Advanced Features of the Network Adapter of the VM. You will not find the Advanced Features option when accessing the Virtual Switch. It is a setting of the Network Adapter assigned to the tails VM. I suppose another option would be to remove the MAC address hardwired into tails’ “Auto eth0”, but that would also reduce your anonymity. It works this way but that is all the testing I did on it! Use the hardwired MAC if possible.

  • Start the VM and specify a password for root when prompted. You will need to recreate the root password each time you start the VM in the diskless VM configuration. It can be a different password for each re-start. You should still use a strong password. The isolation of the Internet from the local Domain is dependent upon the security of this password. You never need to know that password again after you type it twice and you don’t want anyone else to know it either… ever.
  • Use the Internet privately and anonymously from your shiny new diskless Virtual Machine.
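For the script-inclined, here is a rough PowerShell sketch of the setup above. The VM name, .iso path and virtual switch name are my assumptions – substitute your own:

# assumes an existing external virtual switch named "Internet" and the verified image at C:\iso\tails.iso
New-VM -Name TailsVM -MemoryStartupBytes 512MB -SwitchName "Internet"
Set-VMBios -VMName TailsVM -StartupOrder @("CD","IDE","LegacyNetworkAdapter","Floppy")
Add-VMDvdDrive -VMName TailsVM -Path "C:\iso\tails.iso"
# per the May 5, 2014 note above: tails 1.0 wants MAC spoofing enabled on the VM network adapter
Set-VMNetworkAdapter -VMName TailsVM -MacAddressSpoofing On
Start-VM -Name TailsVM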

Iceweasel can browse the Internet just fine in the diskless configuration. PGP and OTR, however, both require persisted certificates. That requires disk storage. Instant Messenger and POP email using the tools in Tails won’t happen unless persistent certificates are available. There are probably a number of ways certificate availability can be realized, e.g., RAMdisk, network, fob, etc.

A Hyper-V Administrator cannot be prevented from configuring storage inside the Virtual Machine if storage is available to the Hyper-V. (Hint: A Hyper-V Administrator can be prevented from configuring storage inside the Virtual Machine if no storage is available to the Hyper-V user.)

Not a total solution, but gives a very clean ability to jump on the Internet without exposing the domain to the Internet when needed.


Posted in Privacy, Secure Data

For Privacy Open the Source & Close the Back Door

There is no surprise in the recent self-admissions that public corporations like Twitter and Microsoft routinely give our private information away. Without need of warrant, subpoena or oversight, they must give away our pen registers, profiles, email and stored files to any and all local, state and federal agencies of many countries when requested. Those same corporations can, will and do also freely and willingly provide our private data stored on their servers or clouds upon request. And will decipher our encrypted private data to assist such surveillance if they can.

In most cases this is done with our permission. Most often we grant our permission by clicking through the “I accept your privacy policies and I have read them” wall – without actually reading the policy – when creating accounts for portals and other social web sites. A 2012 study found that we each would have to spend about 200 hours a year (with a calculated cost of $781 Billion to GDP) to actually read the privacy policies we agree to. At the same time, the word count in privacy policies is going up, further reducing the likelihood that they will be read and understood. Looks to me like there is a conscious effort to discourage you from reading privacy policies.

“It’s OK if they need to read my email if it will keep the country safe,” you could be thinking. “It won’t hurt a thing,” is often added to such rationalizations. And as that delusional thinking is continuously exposed for what it is by the incessant announcements that giant globs of our personally identifiable information stored on the servers of public corporations have been leaked to the bad guys through mysterious yet massive spigots in corporate data centers, corporations signal the reminder that government mandated surveillance back doors in the data center (DC) and central office (CO) architectures help provide the weakened security profile criminals rely upon. And, thanks to these server side back doors, criminals and marketers enjoy the same back door transparency as government agents or anyone else with an ability to look through the back door. Truth be told, marketers have better back door access than government agencies in many cases. Criminals often rely upon masquerading as an administrator, marketer or agent.

Back doors of any stripe undermine security. Exploiting computing back doors is a common objective of marketers, governments, employers, employees, hackers, crackers, spies and criminals alike. The attraction is that there is no way to tell who has ever been through the back door.

Computing back-doors are not a new phenomenon. By now we should be raising our children to fear root kits. They are just back doors. Trojans? Worms? Other so-called malware – especially when the malware can somehow communicate with the outside world. Back doors. SQL Injection? Cross-site scripting? Man-in-the-middle attacks? Key-loggers? Just back doors.

I need to take it one step further though. To a place where developers and administrators begin to get uncomfortable. Scripting languages (PowerShell, c-shell, CL, T-SQL, VBA, javascript, and on and on and on) combined with elevated administrative authority? Back doors. That’s right! Today’s COs and DCs – and by extension public Clouds – are severely and intentionally weakened at their very foundation by built-in back doors that have been tightly coupled to the infrastructure. That’s nuts!

We – as consumers and citizens – pay the costs to maintain the very electronic back doors that allow all comers to effortlessly rob us of our earnings, identities and privacy. That sucks!

And we provide the most generous salaries in Society to the politicians, bureaucrats and corporate officers that champion the continuation of these high risk configurations that burp out our private information daily. That’s dumb.

~~~~~

So, how did we get here? It started way before September 11, 2001 that’s for sure. And the process has been deliberate and bi-partisan. The corporate and political demand for unaccounted access to your personal information has been serviced at the DC or CO for as long as there have been DCs and COs.

Mssr. A. G. Bell and Dr. T. Watson started AT&T in 1885.

To implicate modern corporate data stewards, all one need do is look at the explosion in so called “business intelligence” spending to see your data in use in ways that do not serve your interests. Most often by aiding someone else to make money. There is a clear and vested interest by every corporation that can afford it to use all the data at their disposal in the quest to boost the bottom line. Cheap is a people harming virtue of capitalism. This intention to mine your data is one of the topics you could have read about under that “I have read” button of all those privacy policies.

The political interest in electronic back doors can be followed back to the bootleggers during Prohibition. In 1928 the Taft Supreme Court decided (5-4) that obtaining evidence for the apprehension and prosecution of suspects by tapping a telephone is not a violation of a suspect’s 4th Amendment or 5th Amendment Constitutional Rights.

The Communications Act of 1934 (Roosevelt) granted oversight of consumer privacy to the newly created Federal Communications Commission (FCC).

In 1967 the Warren Supreme Court overruled the 1928 decision (7-1) and said the 4th Amendment does in fact entitle the individual to a “reasonable expectation of privacy.” This was supposed to mean government agents had to obtain a search warrant before listening in on a phone conversation. The confluence of the lawful and clandestine erosion of privacy has become an ocean of mud since that time.

Privacy protections during “any transfer of signs, signals, writing, images, sounds, data, or intelligence of any nature transmitted in whole or in part by a wire, radio, electromagnetic, photo-electronic or photo-optical system that affects interstate or foreign commerce” were revoked in the US – in a bi-partisan fashion – as the Electronic Communications Privacy Act (ECPA) of 1986 (Reagan). ECPA effectively expanded the reach of the Foreign Intelligence Surveillance Act (FISA) of 1978 (Carter) to include US Citizens: heretofore protected by the Bill of Rights from being spied upon by the US government.

No one I know had an email address in 1986. So no one cared that ECPA stripped American citizens of their email privacy. No one I know does not have an email address in 2013. Still, few seem alarmed that there has been no electronic privacy in the US since 1986. Judging by the popularity of the Internet-as-it-is and in the light of the unrelenting and truly awful stories of hacking, identity theft and stalking coming to the fore every day, perhaps nobody even cares?

With the Communications Assistance for Law Enforcement Act (CALEA) of 1994 (Clinton), the full burden of the costs to provision and maintain an expanded ECPA surveillance capability was thrust upon the service provider. Providers were now required to build data centers and central offices with a guaranteed and user friendly listening ability for ECPA surveillance agents: the back door became a government mandate.

The Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA PATRIOT ACT) of 2001 (Bush 2) removed any need to notify individuals that they had been under surveillance unless they are arrested and charged. The burden of electronic privacy was placed squarely on the individual. Privacy officially died.

Even now agencies play games with the USA PATRIOT ACT by not charging US non-citizens and holding them in indefinite detention, perhaps in part to avoid having to disclose surveillance methods should the matter come to trial? I have no way to know what they are doing, but the pattern of escalating surveillance permissiveness in legislation over time suggests that it is only a matter of time before the ability to hold citizens without charge becomes an effective sledge hammer methodology for agencies and eventually the police. History is clear that such detainment will be used and will be used inappropriately.

Still the politicians remain unsatisfied? In 2008, FISA was amended to effectively eliminate the distinction in agency surveillance of an enemy combatant and a citizen. Now everyone is ‘the enemy’ through the FISA visor. FISA Amendment changes continue to ripple through ECPA, CALEA and USA PATRIOT ACT regulations in an expansion of authority to a force already claimed by its leadership to be stretched too thin to keep track of what it is doing, accompanied by a decrease in already inadequate and faltering judicial oversight.

In 2006 and then again in 2011 the USA PATRIOT ACT provisions that were supposed to expire – because they would supposedly make the country safe enough within a limited time that they would no longer be needed – were extended… and re-extended.

Recently the NSA claimed it would violate our privacy if they secretly told two US Senators approximately how many citizens they had electronically spied on. Why is that not disturbing to most? It is worth noting that the Generals of the NSA – yes, the US Military call the shots on the privacy for all American Citizens – made it clear at that time that perhaps no-one has a way to tell who has and has not been electronically spied upon – as an alternative way to explain why they could not answer the Senators’ question.

It might be OK if privacy had been fairly traded for security, but that has not happened. Instead, the government has given our privacy to these unaccountable agencies and the terrorism continues. The police and other agencies are arriving in time to clean up the mess, spending shit loads of the public’s money putting on a show, and spinning the truth about how much these laws are helping. They may be getting better at stopping the second terror attack of a particular stripe, but that is only valuable to society when the bad guys repeat a type of attack. So far, that is not happening. The agencies are being pwned big time and don’t even notice because they are busy reading your email.

The 4th Amendment is null and void. The 9th Amendment is now about exceptions and exclusion to rights instead of the protection of rights not named elsewhere in the Bill of Rights as the unchanged text of the amendment would suggest. If I understand correctly, even the 1st Amendment has been struck. I’m not a constitutional expert, but I am 100% positive privacy is out the window like a baby toy. And now we are too far down that road to even think about going back to find it.

Our government is now self-empowered to spy on the people, self-evidently convinced it must spy on the people and self-authorized to kill its own citizens without trial. This is territory most inconsistent with the Constitution of the United States as I understand it and wholly unacceptable to the vast majority of the citizenry as far as I can tell. Indeed, what people on earth should tolerate such governance?

~~~~~   

So, what can be done? Here are some easy steps any law abiding US citizen can take to shoulder the burden of their own online privacy: because if you don’t, nobody else will.

  1. Never write, type, post or say anything [on-line] you do not want others to see, read, overhear or attribute to you. Anything you put on the Internet just may be out there forever. IBM has boasted the ability to store a bit of data in 12 atoms. Quantum data storage is just around the corner. There is no plan to delete anything.
  2. Accept that all the information you may have placed online at any time up to today – and all so called ‘pen registers’ that go with your information – are irrevocably compromised. You will never know the extent of the compromise until it is too late to do anything about it. The most important action you can take to begin protecting your privacy – from this moment forward – is to use only peer reviewed Open Source privacy enabling software when connected to the Internet.
  3. Stop using Social web sites. There are many ways to keep track of other peoples’ birthdays, if their birthday is truly important to you. There is not much worth saying that can be properly said in one sentence or phrase and understood by boodles of others. Makes for good circus and gives people something to do when there is nothing appealing on TV but not good for communication or privacy. That, in essence, cheapens life. And it makes a few people a lot of money.
  4. Stop using search engines and portals that know who you are. They get their money by looking through the back door and they keep the history. Maybe forever? This data is not generally encrypted, nor even considered your data. Nonetheless, anyone that can hack into this not-your-data has the information needed to recreate your search history. Data Warehouses often leave this data available to business analysts, developers and network engineers in a nicely prepared drill-down that can move from the highest aggregations (e.g. annual sales or total population) to the actions and details of an individual in a few clicks.
  5. Do not go online while logged on to your PC as the administrator or when your Android device is rooted. (FWIW: None of the software mentioned below requires a rooted smartphone or a local administrator login on a PC during use. And while I’m qualifying what I have not yet mentioned, the Android software is all available on Google Play.)
  6. Protect your browsing, browse history and searches against traffic analysis – a favorite pen register surveillance freebie for governments (foreign & domestic), marketers (not so much the foreign ones because the ROI is poor in unexposed markets), and criminals (foreign & domestic). One way might be to surf only from freely accessible public terminals and never sign-in to an online account while surfing from that terminal. An easier and open source way may be to hit the Tor onion servers using Firefox and Orbot from your Android device or the Tor browser bundle from your Linux desktop (we have no way to know if Windows or Mac desktops are backdoored) to obfuscate your identity as you browse. You could even combine the two approaches with Tails.
  7. Use open source software with peer-to-peer public key cryptography to encrypt your e-mail before it leaves your device with a key that is only shared with people you know. Never store your messages, even if encrypted by you, on a mail server else you forgo your right to privacy by default. I think US law actually says something like when your email is stored on somebody else’s mail server it belongs to that somebody else not to you. Even Outlook would be better, but Thunderbird with the Enigmail OpenPGP add-on is a proven option for PGP encryption using any POP account. The hard part will be re-learning to take responsibility for your own email after becoming accustomed to unlimited public storage (and unfettered back door access). It will also become your responsibility to educate your friends and family about the risks to convince them to use peer-to-peer public key cryptography and secure behaviors too. Until then your private communications to those people will continue to leak out no matter what you do to protect yourself.
  8. Use open source software with peer-to-peer public key cryptography for IM. Try GibberBot connected through the Tor network on your Android device. If used by all parties to the chat, this approach will obfuscate some of your pen registers at the DC and all of your message text. Installing Jitsi adds peer-to-peer cryptography to most popular desktop Instant Messaging clients. Jitsi does not close any back doors or other vulnerabilities in IM software. Your pen registers will still be available at the server and attributable to you but your private information will only be exposed as encrypted gibberish. Using the onion servers with Jitsi or GibberBot will help obfuscate your machine specific metadata but the IM server will still know it is your account sending the message. Security experts seem convinced that Apple’s loudly advertised iMessage amounts to a back door: http://blog.cryptographyengineering.com/2012/08/dear-apple-please-set-imessage-free.html
  9. Use open source software with peer-to-peer public key cryptography for SMS. TextSecure from Open Whisper Systems is good stuff for Android. I don’t have a tested desktop suggestion for SMS. My SIP provider includes SMS support to my DID but Jitsi is not aware of that capability while the SIP account is registered and working? I have not looked everywhere, but so far have not come across a secure open source SMS app for the desktop. Know of one? Please, add it to the comments section below.
  10. Use open source software with peer-to-peer public key cryptography for VOIP voice communications. VOIP providers are subject to the same pen and tap requirements as other phone technologies. The difference between VOIP and System 7 or cellular switching is open source. With VOIP several open source apps are available now that use Phil Zimmermann’s ZRTP (the Zimmermann extension to the Real-time Transport Protocol) to implement peer-to-peer encryption of VOIP conversations. Jitsi includes ZRTP by default for all SIP accounts registered. When a call is connected the call is encrypted provided the other party to the call is also using a ZRTP-capable client such as Jitsi or Zfone. Privacy advocates state unequivocally that Jitsi is a better alternative than Skype. http://apapadop.wordpress.com/2012/07/05/a-skype-alternative-worth-its-salt-jitsi/ Improve privacy by using an OpenVPN connection to the SIP server.

As you can see, effective tools are available for anyone that values on-line privacy. The bad guys are already using these tools! The challenge is a human behavioral issue that requires change and awareness. That is to say, only if everyone you communicate with also values privacy and also moves to peer-to-peer encryption models on Open Source Software can that software provide you with privacy. Again, the bad guys are already making the necessary change.

Sadly, I’m not sure how to convince the [American] masses to not sheepishly flock toward the loudest hype. With each catastrophe perpetrated by the very bad guys that the rape of our privacy was supposed to protect us from, the media immediately and loudly lauds the police and other agencies for doing such a great job and how lucky we are to have them. The agencies, for their part, continue to bungle case after case and maintain crafty spokespeople to pluck victory from the jaws of yet another shameful defeat. Turns out they don’t even use the pen registers and tap access as much for surveillance as for forensics: to come in and sweep up the mess left behind by the last bad guy that duped them. Why are the agencies not preventing the horrible events used to attack our privacy?

Then, when the dust settles, the politicians jump into the media limelight to demand a more forceful attack on privacy in general to solve the problem once and for all yet again. And around and around we go. The tactics do not work. How much proof do you need?

At least I can let you know what was easy enough to use and has worked for me in an effort to protect my privacy. I have to admit though, I do not get to use the tools so much because nobody else I know is using them. For privacy, all people in the conversation must use these tools.

SilentCircle.com claims to have everything you could hope for by way of communication privacy for iPhone or Android. It works best if you only communicate with others within the SilentCircle network because that assures that everybody is using the tools. The advertised price is only $20 a month including a phone number. If I read things correctly that morphs into $49 a month if you want to be able to communicate with others that have not paid the $20. But for the $49 you get 3000 minutes of [half-protected] calling to almost any number in the world. You must provide the network connection on your end. They promise peer-to-peer encryption for phone, text and video and include PGP encryption for email. All with the one account. Phil Zimmermann is one of ‘em for cryin’ out loud. The Silent Circle privacy policy says about what you thought the privacy policies you clicked through all those times would say (but didn’t). However, SilentCircle is not truly open source. Furthermore, the site policy promises only to “never sell this data or give access to unauthorized third parties”. We have no idea if there is now – or will ever be – a back door built-in to the encryption of this software that must be installed on our device in order to use the service.

To be clear I have never tried Silent Circle, I only considered that option because of the media noise around Silent Circle. If you don’t need 3000 minutes a month and are open to an open-stack SIP provider consider Diamondcard.us. Last I checked I was averaging $7.23/mo total: home phone, soft phone and smartphone. All with one phone number. As a bonus I contribute to Open Source software simply by using the ultra-low cost service.

SIP needs the Internet. I connect to the Internet through ADSL at home and I use the smartphone through the Wifi connection of a pocket sized 4G Access Point. The ADSL cost is not included in the $7.23 because I had it before I used SIP and I would have it if I stopped using SIP unless they kill the Internet. I am able to talk at home, while walking down the street in Denver or from a parking lot in Wyoming using one phone number. I can actually use any WiFi access point. For example, I had heard a loud media complaint that the Nexus 4 can’t use 4G. Mine does and it is not costing me a thing*. Works just like connecting to any other WPA2 WiFi access point. And it works well! Even my laptop has better call quality over 4G than it does with the sometimes poor quality CenturyLink residential ADSL in my neighborhood.

If you order a DID (phone number) from Diamondcard.us you get their support plus low pay-for-what-you-use call rates plus – that little extra ginzu magic? – I can help you get set up with a more secure soft phone than the IAX2 unit they provide or the peer-to-peer apps mentioned above.

I use most of the open source software mentioned here on my devices at this time.

  1. a Windows 8 laptop (well, actually Windows Server 2012 Standard SP1, but for all practical purposes it’s just Windows 8) while logged in as an otherwise unprivileged user (e.g. not the PC|Domain administrator)
  2. a never rooted and never locked Google Nexus 4 running Jelly Bean 4.2.2 with face recognition enabled on the lock screen.

If you decide on a SIP account consider www.Diamondcard.us. I might make a few pennies if you click this link (and it doesn’t cost you any more or less either way) and I will help you get your devices up and running with the Diamondcard services and open source software with peer-to-peer public key cryptography if your device can do it. Just ask.

My first recommendation when firing up any new SIP connection is to get connected and make a successful call with the software the provider recommends before changing security settings or switching to a more secure soft phone. My second recommendation is that SIP is low enough in cost – and good security is so commonly NOT practiced – that you can afford to experiment and learn for a while before you shut off what you are using now and move to communications where you control your privacy. And if there is a problem with the connection, the SIP provider will be reluctant to even look for a problem with the service if they are not familiar with the configuration on your device.

SIP has by far the most potential for better privacy of the currently available and affordable centrally switched electronic voice communication technologies. It would be cool if SIP – and the entire Internet – were peer-to-peer. It’s technically possible. Fee-for-Service Global networks are simply not do-able without centrally switched services. Small independent SIP providers like Diamondcard still have to comply with the laws with regard to surveillance access. The difference is that the independent guy in any capital system must maintain a higher standard of integrity in order to overcome the scurrilous competitive advantage and brute force marketing capabilities of public corporations. If you are motivated to get a SIP account somewhere other than Diamondcard and have a problem or question, please don’t hesitate to contact me. I’m glad to help if I can. I am not an expert on the topic, just a database guy working to protect his privacy willing to help others do the same.

Until folks awaken to the virtues of open source software and once again see the fundamental value in privacy of communications my privacy will remain vulnerable too.

Posted in Privacy, Secure Data

It’s [Still!] the SQL Injection… Stupid

Did you see Imperva’s October 2012 Hacker Intelligence Report? The report is a data mining study directed toward the on-line forum behaviors among a purportedly representative group of hackers. The milestone for October 2012 is that Imperva now has a year’s worth of data upon which to report. Almost half a million threads in the mine. In this study, keyword analysis found SQL injection sharing the top of the heap with DDoS in terms of what hackers talk about in the forums. In the wake of Powershell 3.0 the study also identifies shell code as the big up-and-comer for mentions in the hacker threads. Only 11% of the forum posts mention “brute force”, “brute force” being the only topical category Imperva charted with a direct relationship to cryptography.

The absence of an aggregate specifically dedicated to cryptography or encryption strongly suggests the hackers are not talking much about cryptography. Hmmmm.

Keyword frequency in 439,587 forum threads:

  1. SQL injection 19%
  2. DDoS 19%
  3. shell code 15%
  4. spam 14%
  5. XSS 12%
  6. brute force 11%
  7. HTML injection 9%

The report also cites disturbing data from a 2012 Gartner study, “Worldwide Spending on Security by Technology Segment, Country and Region, 2010-2016″. The Gartner study purportedly finds that less than 5% of money spent on computing security products buys products that are useful against SQL injection.

Holy octopus slippers! The statistics definitely make a good case for taking a look at Imperva’s products. Even if there is some marketing bias in the study – I don’t think so, but saying even if there is – the findings are more bad news for data security. What’s worse is we seem to be headed in the wrong direction. Consider:

  • The SQL Server 2008 Books Online had a page of SQL injection prevention best practices that was removed from the SQL Server 2012 edition.
  • SQL injection prevention guidance has remained largely unchanged for many years yet is not widely followed. Input validation is the key (see the parameterized sketch just after this list). Judging by the hacker interest in SQL injection, adoption of the guidance must be low.
  • Hackmageddon.com’s Cyber Attack Statistics indicate that SQL injection is found in over 25% of the hacker attacks documented during October 2012.
  • Cloud host FireHost – a VMware-based web host with an impressive security claim and data centers in Phoenix, Dallas, London and Amsterdam – reported that the attacks its infrastructure had detected and defended against showed a 69% spike in SQL injection attacks in the second quarter of 2012 (and then cross-site scripting surged in the just-ended Q3).
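
To make the input validation point concrete: the cheapest validation of all is to keep user input out of the statement text entirely. A minimal parameterized sketch – the dbo.Customer table and the @Search value are hypothetical, not from any product or utility mentioned in this post:

-- parameterized dynamic SQL: user input travels as data, never as statement text
-- dbo.Customer is a hypothetical table used only for illustration
DECLARE @Search NVARCHAR(100) = N'O''Brien; DROP TABLE Customer;--';
DECLARE @stmt NVARCHAR(MAX) =
  N'SELECT CustomerId, CustomerName
    FROM dbo.Customer
    WHERE CustomerName LIKE @pSearch + N''%'';';
EXEC sys.sp_executesql @stmt
   , N'@pSearch NVARCHAR(100)'
   , @pSearch = @Search;  -- the injected text arrives as an odd-looking search string, nothing more

Because the payload never leaves the data channel there is nothing for the black list filters shown later in this post to catch.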

What is stopping us from truly going after this problem? Input validation is not hard and need not be a bottleneck. Threat detection is supposedly a part of any ACE/PCI/FIPS/HIPAA compliant system. Detection avoidance is well understood. Nonetheless, and for reason$ far beyond reasonable, the strategic emphasis remains stuck on compliance. That would be OK if the standards of compliance were even close to adequate. Clearly they are not. The proof is in the pudding.

There are SQL injection detection and avoidance products out there that work. Many many products. Just to name a few – and by way of describing the types of tools that forward thinking organizations are already buying in an effort to eradicate SQL injection:

Application Delivery Network (ADN)

  • Citrix NetScaler
  • F5’s BIG-IP
  • Fortinet FortiGate

Web Application Firewall (WAF)

  • Applicature dotDefender WAF
  • Cisco ACE Web Application Firewall
  • Imperva Web Application Firewall & Cloud WAF
  • Barracuda Networks Web Application Firewall
  • Armorize’s SmartWAF (Web server host based)

Web Vulnerability Scanners (WVS)

  • sqlmap (free)
  • Acunetix WVS

Unified Threat Management (UTM)

  • Checkpoint UTM-1 Total Security Appliance
  • Sophos UTM Web Server Protection
  • Watchguard XTM

These products and the overarching practice changes needed to implement them show success in going after the problem. But, as the Gartner study shows, nobody seems to be buying it.

There are also cloudy platform hosts and ISPs like FireHost that handle the WAF for organizations that cannot justify the capital costs and FTEs required to do the job right in-house due to scale.

Ever so slowly the major hosts are imposing WAF upon all tenants. Another n years at the current snail’s pace and the security improvement just might be noticeable. Seems to me like “do it and do it now” is the only choice that can reverse a ridiculous situation that has gone on too long already. Even secure hosts prioritize profitability over basic security. That is rent seeking.

Any web presence that takes another tack is telegraphing priorities that violate the duty to protect that which is borrowed under an explicit promise of confidentiality or generally accepted fiduciary performance levels equivalent to all other financial and intellectual assets of that organization. Few meet the sniff test. Many remain highly profitable. Just take a look in your Facebook mirror. Customers and consumers have no visibility into how any organization will honor this responsibility nor recourse when that duty is shirked. The metrics above make clear that poor security practices shirk the responsibility and carelessly feed the identity theft racket. It must end. Organizations that perpetuate this status quo and remain profitable are complicit in an ongoing crime against humanity. IT staff who are fairly characterized as “team players” or leaders in such organizations are every bit as culpable as the soldiers of Auschwitz or My Lai or the owner of a piano with ivory keys.

Organizations private and public have a fundamental obligation to protect customers, clients and citizens from illegal exploitation. What in the world makes it OK to exclude chronic identity theft violations from that responsibility?

Even when the data center budget includes one of the more robust solutions, having done the needful in terms of basic input validation, code/user authentication and principle-of-least-privilege access rights is essential for any good defense-in-depth security strategy.
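
By way of a principle-of-least-privilege sketch – every name here is hypothetical, not from any schema in this post – the application principal gets EXECUTE on one schema and nothing else, so even a successful injection runs on a short leash:

-- hypothetical names: the application connects as AppUser and may only execute
-- procedures in the App schema; no ad hoc DML against the base tables
CREATE ROLE AppExecutor AUTHORIZATION dbo;
GRANT EXECUTE ON SCHEMA::App TO AppExecutor;
ALTER ROLE AppExecutor ADD MEMBER AppUser;
DENY VIEW DEFINITION TO AppUser;  -- keep the metadata out of easy reach too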

Consider this T-SQL function from the Encryption Hierarchy Administration schema that implements a passphrase hardness test based upon the SQL injection prevention guidance in the SQL 2008 BOL.

1   -------------------------------------------------------------------------------
2   --    bwunder at yahoo dot com
3   --    Desc: password/passphrase gauntlet
4   --    phrases are frequently used in dynamic SQL so SQL Injection is risk
5   -------------------------------------------------------------------------------
6   CREATE FUNCTION $(EHA_SCHEMA).CheckPhrase 
7     ( @tvp AS NAMEVALUETYPE READONLY )
8   RETURNS @metatvp TABLE 
9     ( Status NVARCHAR (36)
10    , Signature VARBINARY (128) )
11  $(WITH_OPTIONS)
12  AS
13  BEGIN
14    DECLARE @Status NVARCHAR (36)
15          , @Name NVARCHAR(448)
16          , @UpValue NVARCHAR (128) 
17          , @Value NVARCHAR (128) ;
18    -- dft password policy as described in 2008R2 BOL + SQL Injection black list
19    -- fyi: SELECT CAST(NEWID() AS VARCHAR(128)) returns a valid password 
20    SET @Status = 'authenticity';
21    IF EXISTS ( SELECT *
22                FROM sys.certificates c
23                JOIN sys.crypt_properties cp
24                ON c.thumbprint = cp.thumbprint
25                CROSS JOIN sys.database_role_members r
26                WHERE r.role_principal_id = DATABASE_PRINCIPAL_ID ( '$(SPOKE_ADMIN_ROLE)' ) 
27                AND r.member_principal_id = DATABASE_PRINCIPAL_ID ( ORIGINAL_LOGIN() )  
28                AND c.name = '$(OBJECT_CERTIFICATE)'
29              AND c.pvt_key_encryption_type = 'PW'
30                AND cp.major_id = @@PROCID 
31                AND @@NESTLEVEL > 1 -- no direct exec of function 
32                AND IS_OBJECTSIGNED('OBJECT', @@PROCID, 'CERTIFICATE', c.thumbprint) = 1
33                AND EXISTS ( SELECT * FROM sys.database_role_members 
34                              WHERE [role_principal_id] = USER_ID('$(SPOKE_ADMIN_ROLE)')
35                              AND USER_NAME ([member_principal_id]) = SYSTEM_USER 
36                              AND SYSTEM_USER = ORIGINAL_LOGIN() ) )        
37      BEGIN
38        SET @Status = 'decode';
39        SET @Name = ( SELECT DECRYPTBYKEY( Name 
40                                         , 1
41                                         , CAST( KEY_GUID('$(SESSION_SYMMETRIC_KEY)') AS NVARCHAR (36) ) ) 
42        FROM @tvp );
43        SET @Value = ( SELECT DECRYPTBYKEY( Value, 1, @Name ) FROM @tvp );                    
44        IF PATINDEX('%.CONFIG', UPPER(@Name) )  -- no strength test, will fall through 
45         + PATINDEX('%.IDENTITY', UPPER(@Name) )             
46         + PATINDEX('%.PRIVATE', UPPER(@Name) ) 
47         + PATINDEX('%.SALT', UPPER(@Name) )           
48         + PATINDEX('%.SOURCE', UPPER(@Name) ) > 0       
49          SET @Status = 'OK';
50        ELSE
51          BEGIN
52            SET @UpValue = UPPER(@Value);
53            SET @Status = 'strength';
54            IF ( (    ( LEN(@Value) >= $(MIN_PHRASE_LENGTH) )   -- more is better
55                  AND ( PATINDEX('%[#,.;:]%'
56                      , @Value ) = 0 )   -- none of these symbols as recommended in BOL 
57                  AND ( SELECT CASE WHEN PATINDEX('%[A-Z]%'
58                                                  , @Value) > 0 
59                                    THEN 1 ELSE 0 END    -- has uppercase
60                              + CASE WHEN PATINDEX('%[a-z]%'
61                                                  , @Value) > 0 
62                                    THEN 1 ELSE 0 END    -- has lowercase  
63                              + CASE WHEN PATINDEX('%[0-9]%'
64                                                  , @Value) > 0 
65                                  THEN 1 ELSE 0 END    -- has number
66                              + CASE WHEN PATINDEX('%[^A-Za-z0-9]%'            -- has special
67                                                  , REPLACE( @Value,SPACE(1),'' ) ) > 0
68                                    THEN 1 ELSE 0 END
69                              ) > 2 ) )   -- at least 3 of 4
70              BEGIN 
71                -- black list is not so strong but can look for the obvious 
72                SET @Status = 'injection';                       
73                IF ( PATINDEX('%[__"'']%', @UpValue)   -- underscore (so no sp_ or xp_) or quotes
74                   + PATINDEX('%DROP%'   , @UpValue)   -- multi-character commands... 
75                   + PATINDEX('%ADD%'    , @UpValue)
76                   + PATINDEX('%CREATE%' , @UpValue)
77                   + PATINDEX('%SELECT%' , @UpValue)
78                   + PATINDEX('%INSERT%' , @UpValue)
79                   + PATINDEX('%UPDATE%' , @UpValue)
80                   + PATINDEX('%DELETE%' , @UpValue)
81                   + PATINDEX('%GRANT%'  , @UpValue)
82                   + PATINDEX('%REVOKE%' , @UpValue)
83                   + PATINDEX('%RUNAS%'  , @UpValue)
84                   + PATINDEX('%ALTER%'  , @UpValue)
85                   + PATINDEX('%EXEC%'   , @UpValue)
86                   + PATINDEX('%--%'     , @Value)     -- comments...
87                   + PATINDEX('%*/%'     , @Value) 
88                   + PATINDEX('%/*%'     , @Value)  = 0 )
89                  BEGIN 
90                    SET @Status = 'duplicate';
91                    IF NOT EXISTS ( SELECT *                  -- not already used  
92                                    FROM $(EHA_SCHEMA).$(NAMEVALUES_TABLE) n
93                                    WHERE ValueBucket = $(EHA_SCHEMA).AddSalt( '$(SPOKE_DATABASE)'
94                                                                              , '$(EHA_SCHEMA)'
95                                                                              , '$(NAMEVALUES_TABLE)'
96                                                                              , 'ValueBucket' 
97                                                                              , @Value)
98                                    AND CAST(DecryptByKey( n.Value -- should be rare
99                                                          , 1
100                                                         , @Name ) AS NVARCHAR (128) )  =  @Value )  
101                     SET @Status = 'OK';
102                 END
103             END
104          END
105     END
106   INSERT @metatvp
107     ( Status
108     , Signature ) 
109   VALUES 
110     ( @Status
111    , SignByCert( CERT_ID('$(AUTHENTICITY_CERTIFICATE)'), @Status ) );
112   RETURN;
113 END
114 GO
115 ADD SIGNATURE TO $(EHA_SCHEMA).CheckPhrase 
116 BY CERTIFICATE $(OBJECT_CERTIFICATE)
117 WITH PASSWORD = '$(OBJECT_CERTIFICATE_ENCRYPTION_PHRASE)';
118 GO

SQL injection input validation is only part of what goes on here. The function accepts an already encrypted name value pair TVP as a parameter and returns a signed business rule validation result as a TVP.  To do so, first the schema and user authenticity are verified before the phrase is decoded and the SQL injection/detection rules are applied. Only if all rules are met will an IO be required to verify that the phrase has not already been used.

The bi-directional encoding of parameters with a private session scoped symmetric key helps to narrow the SQL injection threat vector even before the filter(s) can be applied. This means that the passed values have already successfully been used in a T-SQL ENCRYPTBYKEY command in the current database session. Not that encryption does anything to prevent or detect SQL injection. It is more that the first touch of any user input value carries higher risk. Likewise the first use of an input in any dynamic SQL  statement carries a higher risk. Always better to do something benign with user input before you risk rubbing it against your data.
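
For orientation only, a caller might load the TVP something like the sketch below. This is not code from the utility – the NAMEVALUETYPE column list and the already-open session key are assumptions inferred from the function body – but it shows the bi-directional encoding: the Name is ciphered with the session key’s GUID as the authenticator and the Value is ciphered with the clear text Name as the authenticator, mirroring the two DECRYPTBYKEY calls above.

-- hypothetical caller sketch: assumes NAMEVALUETYPE is (Name VARBINARY(8000), Value VARBINARY(8000))
-- and that $(SESSION_SYMMETRIC_KEY) is already open in this session
DECLARE @Name NVARCHAR (448) = N'SomeSpoke.PRIVATE'
      , @Value NVARCHAR (128) = N'a candidate passphrase goes here';
DECLARE @tvp AS NAMEVALUETYPE;
INSERT @tvp ( Name, Value )
SELECT ENCRYPTBYKEY( KEY_GUID('$(SESSION_SYMMETRIC_KEY)'), @Name, 1
                   , CAST( KEY_GUID('$(SESSION_SYMMETRIC_KEY)') AS NVARCHAR (36) ) )
     , ENCRYPTBYKEY( KEY_GUID('$(SESSION_SYMMETRIC_KEY)'), @Value, 1, @Name );
SELECT Status, Signature FROM $(EHA_SCHEMA).CheckPhrase ( @tvp );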

In the process of validation, two black lists are used to filter punctuation (line 55) and specific character sequences (lines 73-88) frequently identified as injection markers.

Another function from the schema validates names for T-SQL Encryption Hierarchy key export files. In this function the black list filter also includes file system specific markers as identified in the same SQL Server 2008 R2 Books Online article. The same somewhat cumbersome PATINDEX() driven exclusion filter pattern is used in the file name function as is used for hardness testing.

1   -------------------------------------------------------------------------------
2   --    bwunder at yahoo dot com
3   --    Desc: apply file naming rules and conventions
4   --    name not already in use and no identified sql injection
5   -------------------------------------------------------------------------------
6   CREATE FUNCTION $(EHA_SCHEMA).CheckFile 
7     ( @Name VARBINARY (8000) )
8   RETURNS BIT
9   $(WITH_OPTIONS)
10  AS
11  BEGIN
12    RETURN (SELECT CASE WHEN  PATINDEX('%[#,.;:"'']%', Name) 
13                            + PATINDEX('%--%', Name)
14                            + PATINDEX('%*/%', Name)
15                            + PATINDEX('%/*%', Name)
16                            + PATINDEX('%DROP%', Name)
17                            + PATINDEX('%CREATE%', Name)
18                            + PATINDEX('%SELECT%', Name)
19                            + PATINDEX('%INSERT%', Name)
20                            + PATINDEX('%UPDATE%', Name)
21                            + PATINDEX('%DELETE%', Name)
22                            + PATINDEX('%GRANT%', Name)
23                            + PATINDEX('%ALTER%', Name) 
24                            + PATINDEX('%AUX%', Name) 
25                            + PATINDEX('%CLOCK$%', Name) 
26                            + PATINDEX('%COM[1-8]%', Name)
27                            + PATINDEX('%CON%', Name) 
28                            + PATINDEX('%LPT[1-8]%', Name) 
29                            + PATINDEX('%NUL%', Name) 
30                            + PATINDEX('%PRN%', Name) = 0
31                        AND NOT EXISTS 
32                ( SELECT * 
33                  FROM $(EHA_SCHEMA).$(BACKUP_ACTIVITY_TABLE)
34                  WHERE BackupNameBucket = $(EHA_SCHEMA).AddSalt( '$(SPOKE_DATABASE)'
35                                                                , '$(EHA_SCHEMA)'
36                                                                , '$(BACKUP_ACTIVITY_TABLE)'
37                                                                , 'BackupNameBucket' 
38                                                                , Name ) )    
39                        THEN 1 ELSE 0 END
40            FROM (SELECT CAST( DECRYPTBYKEY ( @Name ) AS NVARCHAR(448) ) AS Name  
41                  FROM sys.certificates c
42                  JOIN sys.crypt_properties cp
43                  ON c.thumbprint = cp.thumbprint
44                  CROSS JOIN sys.database_role_members r
45                  WHERE r.role_principal_id = DATABASE_PRINCIPAL_ID ( '$(SPOKE_ADMIN_ROLE)' ) 
46                  AND r.member_principal_id = DATABASE_PRINCIPAL_ID ( ORIGINAL_LOGIN() )  
47                  AND c.name = '$(OBJECT_CERTIFICATE)'
48                  AND c.pvt_key_encryption_type = 'PW'
49                  AND cp.major_id = @@PROCID 
50                  AND @@NESTLEVEL > 1 
51                  AND IS_OBJECTSIGNED('OBJECT', @@PROCID, 'CERTIFICATE', c.thumbprint) = 1
52                  AND EXISTS (SELECT * FROM sys.database_role_members 
53                              WHERE [role_principal_id] = USER_ID('$(SPOKE_ADMIN_ROLE)')
54                              AND USER_NAME ([member_principal_id]) = SYSTEM_USER 
55                              AND SYSTEM_USER = ORIGINAL_LOGIN() ) ) AS derived );
56  END
57  GO
58  ADD SIGNATURE TO $(EHA_SCHEMA).CheckFile 
59  BY CERTIFICATE $(OBJECT_CERTIFICATE)
60  WITH PASSWORD = '$(OBJECT_CERTIFICATE_ENCRYPTION_PHRASE)';
61  GO

When processing happens one string at a time this filter is of small performance concern. However, processing a large set against such a filter could be slow and disruptive. The objects are obfuscated into the database WITH ENCRYPTION so only those with elevated access, development environment access and source repository access are likely to be aware of the filter details. Most of the logic in the functions is to verify the authority and authenticity of the caller and the calling object.

These functions demonstrate that fundamental SQL injection protection is easily achievable even for an application with the crippled regular expression support of T-SQL. If performance or load is a service level issue, the CLR might be a better host for the functions. However, as with obfuscation, the most important place to validate against SQL injection is at the point where data enters the system. In some cases SQL injection protection done after the unvalidated text moves inside SQL Server will be too late. Only when the user interface is a SQL command line could it be the best security choice to validate against SQL injection inside SQL Server. In scope, SQL injection prevention is an application layer exercise. That being said, skipping the SQL injection validation inside SQL Server is reckless. Robust security will employ layers of defense.

In my mind, the command line too must always be considered as an attack vector even if not a favorite SQL injection attack vector at this time and even if the application makes no use of the command line. As the metamorphosis of the security landscape unfolds, the hackers will be shining their cyber-flashlights everywhere and chasing anything shiny. To discount that the command line will continue to get a good share of malicious attention as long as there are command lines and hackers is negligence and/or denial.

For the Encryption Hierarchy Administration schema that uses the functions above a Powershell deployment and administration interface is helpful to improve security. With a pure T-SQL approach there is always a risk of exposure of user input text of secrets in memory buffers before the input value can be ciphered by the database engine. Granted, it is a brief period of time and I am not even sure how one would go about sniffing SQLCMD command line input without running in a debugger or waiting for the input to move into database engine workspace. It surely must be available somewhere in memory. The scenario is a target. I know I have never checked if the operating system is running a debugger in any T-SQL script I have ever written. This helps to illustrate why the best time and place to encrypt data is at the time it is generated in the place where it is generated or enters the system. Even then, 100% certainty will remain elusive if the system cannot be verified to be keylogger free.

The utility models a somewhat unusual scenario where encryption at the database is the right choice. Nonetheless, getting the many secrets required for the administration of encryption keys and key backups entered into the system presents a potential for exposure to memory mining hackers. Using SecureString input and SecureString based SMO methods to instantiate the database objects that need the secrets can eliminate much of that vulnerability. As you may know, a SecureString is a .NET object that is kept encrypted – with a key tied to the current user’s session – at all times while in memory, and it supports explicit cleanup from memory that can be more secure than waiting on garbage collection. It is relatively easy for the user to decrypt the SecureString data on demand but doing so would result in sensitive information becoming available as clear text in the memory registers where the un-encrypted copy is written. No other users have access to the encryption key.

  
function Decode-SecureString 
{   
    [CmdletBinding( PositionalBinding=$true )]
    [OutputType( [String] )]
    param ( [Parameter( Mandatory=$true, ValueFromPipeline=$true )]
            [System.Security.SecureString] $secureString )  
    begin 
    { $marshal = [System.Runtime.InteropServices.Marshal] }
    process 
    { $BSTR = $marshal::SecureStringToBSTR($secureString )
     $marshal::PtrToStringAuto($BSTR) } 
    end
    { $marshal::ZeroFreeBSTR($BSTR) }
}

Decode-SecureString $( ConvertTo-SecureString '1Qa@wSdE3$rFgT'  -AsPlainText -Force )

Powershell obfuscates Read-Host user input of type SecureString with the asterisk (*) on the input screen. With the ISE you get a WPF input dialog that more clearly shows the prompt but could also become annoying for command-line purists.

To evaluate the use of a Powershell installer, I coded a Powershell installation wrapper for the hub database installation scripts of the utility. The hub database needs 4 secrets: the passwords for four contained database users. With this change it makes no sense to add an extra trip to the database to evaluate the hardness and SQL injection vulnerability of each secret. Instead, the SQL injection input validation logic from the T-SQL functions above is migrated to a Powershell Advanced Function – Advanced meaning that the function acts like a CmdLet – that accepts a SecureString.

  
1  function Test-EHSecureString 
2  {   
3   [CmdletBinding( PositionalBinding=$true )]
4      [OutputType( [Boolean] )]
5      param ( [Parameter( Mandatory=$true, ValueFromPipeline=$true )] 
6              [System.Security.SecureString] $secureString
7            , [Int32] $minLength = 14 
8            , [Int32] $minScore = 3 )  
9      begin 
10     { 
11         $marshal = [System.Runtime.InteropServices.Marshal] 
12     }
13     process 
14     {   # need the var to zero & free unencrypted copy of secret
15         [Int16] $score = 0
16         $BSTR = $marshal::SecureStringToBSTR($secureString)
17         if ( $marshal::PtrToStringAuto($BSTR).length -ge $minLength )
18         { 
19             switch -Regex ( $( $marshal::PtrToStringAuto($BSTR) ) )
20            {
21             '[#,.;:\\]+?' { Write-Warning ( 'character: {0}' -f $Matches[0] ); Break }
22             '(DROP|ADD|CREATE|SELECT|INSERT|UPDATE|DELETE|GRANT|REVOKE|RUNAS|ALTER)+?' 
23                           { Write-Warning ( 'SQL command: {0}' -f $Matches[0] ); Break }
24             '(AUX|CLOCK|COM[1-8]|CON|LPT[1-8]|NUL|PRN)+?' 
25                           { Write-Warning ( 'dos command: {0}' -f $Matches[0] ); Break } 
26             '(--|\*\/|\/\*)+?'{ Write-Warning ( 'comment: {0}' -f $Matches[0] ); Break }
27             '(?-i)[a-z]'  { $score+=1 }
28             '(?-i)[A-Z]'  { $score+=1 }
29             '\d+?'        { $score+=1 }
30             '\S\W+?'      { $score+=1 }
31             Default { Write-Warning $switch.current; Break }        
32            } 
33         }
34         else
35         { write-warning 
36                      ( 'length: {0}' -f $( $marshal::PtrToStringAuto($BSTR).length ) ) } 
37         write-warning ( 'score: {0}' -f $score )  
38         $( $score -ge $minScore )
39     }        
40     end { $marshal::ZeroFreeBSTR($BSTR) }
41 }

One thing is for sure: much less code is required in Powershell than in T-SQL to create a user. To securely invoke the function a Read-Host -AsSecureString prompts for user input that will go into the memory location allocated to the SecureString and only as encrypted data. Here the Powershell script will prompt for input until it gets an acceptable value. Remember, no one else will be able to decode this value in memory. Only the user that created the SecureString. Defense-in-Depth demands that care be taken that the SecureString memory location is not taken for an off-line brute force interrogation.

 

do { $HUB_ADMIN_PASSWORD = $(Read-Host 'HUB_ADMIN_PASSWORD?' -AsSecureString ) } 
until ( $(Test-EHSecureString $HUB_ADMIN_PASSWORD ) )

Then the entered secret is applied to the database using SMO. In this case a user with password will be created in a contained database.

  
if ( $( Get-ChildItem -Name) -notcontains 'HubAdmin' )
{
    $HubAdmin = New-Object Microsoft.SqlServer.Management.Smo.User
    $HubAdmin.Parent = $smoHubDB
    $HubAdmin.Name = 'HubAdmin'     
    $HubAdmin.Create( $HUB_ADMIN_PASSWORD ) 
}

The Test-EHSecureString function does expose the clear text of the secret in the PSDebug trace stream during the switch operation as $switch. To my knowledge there is no way to obfuscate that value. On top of that, the disposal of the $switch automatic variable is under garbage collection so there is no reliable way to know when you can stop wondering whether anyone found it. That uncertainty may be more of a risk than the SQLCMD exposure that the secure string is supposed to solve? On the other hand, the risk of exposure of the SQLCMD setvar values is undetermined so it would be silly to pretend to quantify an unknown risk. What I know for sure is those values have to touch primary storage buffers in order to load the variables and then populate the SQLCMD – $(VARIABLE_NAME) – tokens in the script. At least with the Powershell script I can quantify the risk and take all necessary precautions to mitigate it. With SQLCMD setvar variables about all I can do to be certain my secrets are not fished out of the free pool or wherever else they might be buffered as clear text is remove the power. Even that is no guarantee the secrets are not leaked to a paging file, spooler or log while exposed internally or externally as clear text as the system shuts down.

At this point I’m convinced that it is best to validate Powershell SecureString input against SQL injection threats. The risk that someone will find the secrets in local memory and use them with malicious intent is far less than the risk from SQL injection in my estimation. I will continue to integrate Powershell into the install script with the goal of  using a Powershell installer. This is a much better input scenario than the T-SQL event obfuscation technique I had been using.

I will leave the SQL injection filters in the T-SQL functions to complement the Powershell filter for two reasons. Defense-in-depth and [sic]Defense-in-depth. 

Posted in Code Review, Data Loading, Secure Data, Testing

TSQL Cryptographic Patterns – part 9: we’d better take this OFFLINE

There is a compelling defense-in-depth rationale for enabling AUTO_CLOSE on a database where sensitive data is stored.

ALTER DATABASE $(ANY_USER_DATABASE) SET AUTO_CLOSE ON;

When AUTO_CLOSE is ON the database will cleanly shutdown when the last active user session disconnects or moves to another database. To be cleanly shutdown means that a database can be opened again later without the need for SQL Server to run recovery on that database. Everything in the log has been processed into the data set. FWIW: we don’t get to decide when recovery runs, the database engine makes that determination. We do get to mess around with the CHECKPOINT a little more in SQL Server 2012 with the TARGET_RECOVERY_TIME database option that overrides the server recovery interval. That actually does appear to be a step in the direction of exposing control of AUTO_CLOSE though probably not intentional.
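
For reference, the 2012 syntax looks like this – same placeholder database name as above:

-- indirect checkpoints: override the server recovery interval for one database
ALTER DATABASE $(ANY_USER_DATABASE) SET TARGET_RECOVERY_TIME = 60 SECONDS;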

Using AUTO_CLOSE is easy:

  • Once enabled there is nothing to do.
  • The easiest way to tell if AUTO_CLOSE is ON is to query the is_auto_close_on column in sys.databases.
  • The easiest way to tell if a database with AUTO_CLOSE ON is cleanly shutdown is to query the is_cleanly_shutdown column in sys.databases.
  • The most eye-opening way to tell if the database is closed at the present time is to copy the .mdf or .ldf. If you can copy the files the database is cleanly shut down, if you cannot the database is open and accumulating resources i.e., data pages, locks, latches, versions, query plans, connections, etc..

(Note that there are a few respectable bloggers claiming that AUTO_CLOSE is marked for deprecation since SQL 2008. I believe there is some confusion. The blogs I have seen with this claim reference the SQL Server 2000 DMO AutoClose Property page as evidence. If you look, you will notice that all the DMO documentation pages for SQL Server 2000 carry the same deprecation warning. Pretty sure DMO is the deprecated technology not AUTO_CLOSE. I could be wrong.)

When a database cleanly shuts down all resources held for that database are freed from memory. Log records are processed such that no recovery is required when the database “opens”. Encryption keys and certificates are closed preventing any free rides on encryption hierarchies opened during legitimate use. DMV data collected from that database disappears. The file system locks on all log and data files are released. Data is flushed from cache. If TDE or file system encryption is in use, this moves all data behind that layer of obfuscation. The unloading is asynchronous, happening within 300ms.

The main difference between a database with AUTO_CLOSE ON when cleanly shutdown and an OFFLINE database is the AUTO part. That is, an administrator must manually transition the database between ONLINE and OFFLINE and back while AUTO_CLOSE automagically transitions the database between the unmodifiable state and the usable state for any valid request.
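
The manual equivalent is a one-liner each way. ROLLBACK IMMEDIATE is my own addition to evict any lingering session, so treat it as a suggestion rather than a requirement:

-- administrator-driven transition; AUTO_CLOSE does this part for you
ALTER DATABASE $(ANY_USER_DATABASE) SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- and later, before the next legitimate use
ALTER DATABASE $(ANY_USER_DATABASE) SET ONLINE;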

I notice that databases do not get the is_cleanly_shutdown bit set when the database is taken OFFLINE. While I cannot repro on demand, I also noticed that taking the test database OFFLINE will force a recovery when that database goes back ONLINE every now and again. The documentation is clear that an OFFLINE database is cleanly shutdown. Wonder what’s up with that?


SELECT name, is_auto_close_on, state_desc, is_cleanly_shutdown
FROM sys.databases WHERE PATINDEX('Test%', name ) = 1;


name              is_auto_close_on state_desc  is_cleanly_shutdown
----------------- ---------------- ----------  ------------------- 
Test_OFFLINE                     0    OFFLINE                    0 
Test_AUTO_CLOSE                  1    ONLINE                     1 

The pooled connection overhead and bottlenecking that comes with AUTO_CLOSE are fairly well known. Most of the time that is about all one needs to know to avoid AUTO_CLOSE. The experts simply tell us to turn AUTO_CLOSE off and leave it at that. In fact, the best practice policies included in the SQL Server installation will disable AUTO_CLOSE on all databases.

Enabling the best practice policies is far better than not using policies or following the painful trajectory of trial and error to “find” the correct best practices. In all cases beware the dogma. A well-considered policy built upon best practices, patience and persistence is preferred.

Applications that create and store sensitive data are at risk of compromise if adequate considerations are not given to vulnerabilities that exploit the available SQL Server resource metadata and/or SQL Server primary storage buffers. The query cache, for example, can be helpful in understanding the data store and the data flow. This is useful information for man-in-the-middle, SQL Injection attackers or insider hi-jinx. Likewise, the sys.dm_exec_requests DMV or sys.sysprocesses compatibility view will point the uninitiated and uninvited to every client serviced by a database host. From there a SQL Injection attacker can map the application, identify weak hosts inside the DMZ and perhaps establish a SQL Injection based command line access targeting the weak internal node. The ways to be hacked are many.
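
To see how much reconnaissance sits in plain view, consider what any principal granted VIEW SERVER STATE can pull back with two unremarkable queries – a sketch, nothing exotic:

-- who is connected, from where, and with what client stack
SELECT s.host_name, s.program_name, s.login_name, s.client_interface_name
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1;
-- and what statement text the plan cache has seen lately
SELECT TOP (20) qs.execution_count, st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text( qs.sql_handle ) AS st
ORDER BY qs.last_execution_time DESC;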

The security implications of database resources are not normally considered in application design. If anything, database architectures err on the side of keeping resources loaded and exposed by making more memory available to the SQL Server. This increases the risks that cached data, data storage patterns, data flow patterns and cached query text can be mined for malicious purpose. To be sure, database resource exploits do not represent the low hanging fruit, but equally as certainly most of the low hanging fruit has by now been plucked. Nonetheless, within the context of a well-considered defense-in-depth data security architecture securing database resource access is essential. Presuming adequate system obfuscation of buffers in the free pool, releasing resources held in memory will provide a layer of protection against exploits of SQL Server memory space.

From another perspective: only if the storage location is secured and encrypted would it be wise to leverage AUTO_CLOSE as a security layer. Anyone with read access to the storage location can copy the database files when cleanly shutdown. An un-encrypted database file can also be opened in EMERGENCY mode (READONLY) on another SQL Server – illustrating the value of encrypted storage.
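
The threat is mundane: once the files are closed, anything that can read the folder can copy the database and attach the copy somewhere else – no EMERGENCY-mode tricks required. A hedged sketch, with a made-up path and database name:

-- on another instance, against a copied .mdf of a cleanly shutdown database
CREATE DATABASE StolenCopy
  ON ( FILENAME = N'C:\loot\TestDb.mdf' )
  FOR ATTACH_REBUILD_LOG;  -- the log can be rebuilt because the source was cleanly shutdown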

Applications with a relatively low rate of change and highly sensitive data, such as the Encryption Hierarchy Administration T-SQL utility that provided example T-SQL for this series and some witness protection relocation databases are candidates for the anti-sniffing and anti-hijacking protections afforded by resource unloading. Furthermore, when TDE is also configured and database resources are unloaded, the most complete benefit for TDE can be achieved. Under such conditions there are no back-doors or alternative access paths that can circumvent the encryption.

I decided to put it to a quick test. The test output below shows the resource caching behavior around AUTO_CLOSE’s clean shutdown and OFFLINE under 3 configuration scenarios:

  1. AUTO_CLOSE ON
  2. AUTO_CLOSE OFF
  3. AUTO_CLOSE ON with Active Service Broker

Connecting to BWUNDER-PC\ELEVEN...
Set database context to 'master'.
Microsoft SQL Server 2012 - 11.0.2100.60 (X64) 
    Feb 10 2012 19:39:15 
    Copyright (c) Microsoft Corporation
    Enterprise Evaluation Edition (64-bit) on Windows NT 6.1 <X64> 
(Build 7601: Service Pack 1)

CREATE DATABASE TestDb
OPEN MASTER KEY
CREATE DATABASE ENCRYPTION KEY
Warning: The certificate ~snip~ has not been backed up ~snip~
Set database context to 'tempdb'.
CREATE FUNCTION tempdb.dbo.fnServerResources
CREATE PROCEDURE tempdb.dbo.CheckResourcesFromTempDB
Set database context to 'TestDb'.
OPEN MASTER KEY
CREATE PROCEDURE TestDb.dbo.CheckDbResources
 
#1  SET AUTO_CLOSE ON                 -- database resources ----------------
                                      sessions   objects   q-stats     locks
try invoke AUTO_CLOSE                        3         2         0         2
Changed database context to 'master'.
wait a second...                             0         2         1         0
cleanly shutdown                             0         0         0         0
Changed database context to 'TestDb'.
initiate OFFLINE                             1         2         0         2
Changed database context to 'master'.
SET OFFLINE                                  0         0         0         0
SET ONLINE                                   1         1         0         2
 
#2  SET AUTO_CLOSE OFF                -- database resources ----------------
                                      sessions   objects   q-stats     locks
try invoke AUTO_CLOSE                        1         2         0         2
Changed database context to 'master'.
wait a second...                             0         2         1         0
not shutdown                                 0         2         1         0
Changed database context to 'TestDb'.
initiate OFFLINE                             1         2         1         2
Changed database context to 'master'.
SET OFFLINE                                  0         0         0         0
SET ONLINE                                   1         1         0         2
 
 Configure Service Broker
CREATE PROCEDURE TestDb.dbo.TestQActivationProcedure
CREATE QUEUE WITH ACTIVATION ON
CREATE SERVICE for QUEUE
CREATE EVENT NOTIFICATION to SERVICE
1 events enqueued
 
#3  SET AUTO_CLOSE ON                 --- database resources ---------------
                                      sessions   objects   q-stats     locks
try invoke AUTO_CLOSE                        2         3         1         3
Changed database context to 'master'.
wait a second...                             1         3         2         1
not shutdown                                 1         3         2         1
Changed database context to 'TestDb'.
initiate OFFLINE                             2         3         2         3
Changed database context to 'master'.
SET OFFLINE                                  0         0         0         0
SET ONLINE                                   1         1         0         2
 
Disconnecting connection from BWUNDER-PC\ELEVEN...

Unfortunately AUTO_CLOSE does not pass this sniff test. It simply is not reliable under many common SQL configurations. It is not consistently good at returning resources to a busy desktop or in closing keys. A persistent connection, a daemon process, a scheduled job, replication, mirroring or Service Broker activation – among other things – can interfere with a clean shutdown leaving the database memory work space and cache always available for malicious perusal. AUTO_CLOSE would too easily become a phantom security layer. You might find some comfort that it is enabled but you can never be certain that the protection is working.

The best way to be sure a database is shut down when idle is to take the database OFFLINE. That would also require a step to bring the database online before each use. Given that necessity, detaching the database would also work with the added advantage that the database reference is removed from any server scoped metadata in the catalog.
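
A sketch of that last option – the database name is the same placeholder used earlier, and the file path is whatever the detach leaves behind:

-- remove the database from the catalog entirely between uses
EXEC master.dbo.sp_detach_db @dbname = N'$(ANY_USER_DATABASE)', @skipchecks = 'true';
-- re-attach before the next legitimate use
CREATE DATABASE $(ANY_USER_DATABASE)
  ON ( FILENAME = N'<path to the detached .mdf>' )
  FOR ATTACH;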

Posted in Encryption Hierarchies, Secure Data, Testing