ballad for a data miner

First you save some tuples, then you lose your scruples
people help you along your way, not for your deeds, but how you say
to take their privacy and freedom, just remind ’em they don’t pay
take their money too on a ploy, use what you’ve learned from loggin’ all day

canto:
Cartesian aggregations, Poisson Distributions
causal correlations and standard deviations
You can’t sell your sexy underwear to little kids who watch TV bears
unless your cuddly cartoon cub convinces them that mom…. won’t… mind…
(con variazioni: money… is… love…, wrong… is… right…, yes… means… no…)

Analyzing, calculating, place your bets, stop salivating
map reduce then slice and dice, this new ‘gorithm is twice as nice
money making, manipulating, paying to play and kick-back taking
suffering fools but taking their payoffs, while spying on staff to cherry-pick the lay-offs

canto

Watching out for number one, what the hell, it's too much fun,
to those that worked so tirelessly, Thank You Suckers! but no more pay
So many losers along the way, “stupid people” getting in your way
Is that an angry mob in your pocket? Is that a golden fob that you got on it?

canto

This corporation’s got no conscience and global economies won’t scale
when free is just a profit center and people cheap commodities
who’s choices are all illusions, fed by greed and false conclusion,
who’s purpose is to do your bid, fight pointless wars and clean your crapers, please

canto

vamp till cue
First you save some tuples then you lose your scruples
People help you on your way not by your words but how you say

finale
Spent his last day in the hole
won’t be going down any more
work for the man, live while you can
won’t be to long before your bound from this land

Posted in Privacy

ad hoc T-SQL via TLS (SSL): Almost Perfect Forward Secrecy

The day the Heartbleed OpenSSL ‘vulnerability’ [don’t they mean backdoor?] hits the newswires seems an ideal moment to bring up an easy way to wrap your query results in an SSL tunnel between the database server and wherever you happen to be, with whatever device you happen to have available, using node.js. (Also see the previous post re: making hay from you know what with node.js. And PLEASE consider this post as encouragement to upgrade to OpenSSL 1.0.1g without delay! Update March 3, 2015: make that 1.0.1k – see https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-02 – and/or node.js v0.10.36, see https://strongloop.com/strongblog/are-node-and-io-js-affected-by-the-freak-attack-openssl-vulnerability/ – geez, will the NSA ever come clean on all the back doors they own?)

Heartbleed is clearly the disclosure of a probably intentional, free-swinging back door in open source software, poorly disguised as a vulnerability discovered after years in the wild. I’m afraid “Oh, gee, I forgot to test that…” just doesn’t cut it when you’re talking about OpenSSL. That just made every one of us who has been advocating for open source software as a pathway toward the restoration of secure computing and personal privacy look like feckless dumb shits: as big of fools as those politicians from the apropos other party… You know who I’m talking about – the dumb-as-craps and the repugnant-ones or something like that… All classic examples of puppet politicians – as are we puppet software engineers – mindlessly serving the ‘good enough’ mentality demanded of today’s necessarily exuberant and young software engineers, as has been incumbent upon politicians throughout the times as humanity slogs this now clearly unacceptable – but nicely profitable for a few – course we travel… toward some glorious and grand self-annihilation – so they need us to believe anyway, to justify the terminal damage they inflict upon the planet for self-profit.

In my estimation, the only lesson that will be learnt by proprietary software vendors and open source communities alike from the cardiac damage that OpenSSL is about to endure as a result of this little old bleeding heart will be to never admit anything. Ever. Some things never change.

OpenSSL just might not survive without the accountability that is established through full disclosure – at least about what really happened here, but preferably as a community. Preferably a disclosure that provides compelling evidence that nothing else so sinister is yet being concealed. I doubt that can happen without full and immediate disclosure from every individual involved in every design decision and every test automation script implemented or used during the creation, development and community review of that software. And I doubt any software organization or community would be able to really come clean about this one because – and I admit this is opinion based mostly on how I have seen the world go ’round over the last 60 years – maybe even a community-building foundation of open source software such as OpenSSL can be ‘persuaded’ to submit to governmental demands and somehow also remain bound to an organizational silence on the matter? Prepare yourselves for another doozy from one of the grand pooh-bahs – and real bad liars – of the NSA before all is said and done on this one.

May 9, 2014 – So far General Clapper has delivered as expected. On the tails of his April Fools’ Day admission of what we already knew – the NSA has conducted mass surveillance of American citizens without warrant or suspicion for quite a while – he first denied having ever exploited the OpenSSL buffer back door, a bald-faced lie he stuck with for maybe a week or three. Now he is merely reiterating an older, but extremely disturbing, tactical right he has claimed before: that the NSA need not reveal exploitable bugs it knows about, even to American and allied owners or maintainers of open source code or hardware. All the owners and maintainers get to know about are the backdoors they were coerced into willingly implementing. That is just plain outrageous. A standard for tyranny is established. I guess we should be at least glad that the pooh-bah has been willing to share his despotic rule – at least in public – with first “W” and then Bronco. Hell, Bronco even got us to believe that keeping the pooh-bah on his throne was a presidential decision. We will have to wait and see if he can tolerate Monica Bengazi, I reckon.

I wonder if we will ever hear that admission of the ultimate obvious truth that the NSA is covertly responsible for the existence of the OpenSSL back door? This must scare the hell out of Clapper’s inner circle – whoever they might be. Once they are forced to admit the first backdoor, it won’t be long before the other US Government mandated back doors to our privacy begin to surface and close. I have no doubt there will be a whole lot more colluding public corporations than just Microsoft, Apple and Google. I know it’s deep and ugly, but I honestly have no idea just how deep and ugly. All I can see clearly is that there must be a good reason our Government has made such a big deal out of unrevealed backdoors planted for the Chinese Government in Huawei’s network hardware…


I made the claim in the title that this technique uses ad hoc queries. That needs some qualification. Queries in the example code below are submitted asynchronously by a node.js https server running at the database server. The query is not exactly ad hoc because you must place the SQL in a text file for use by the node.js https server before starting the node server; then you can execute the query from any browser with an IP path to the node server. While there is always a way to get to the text file and edit the query if need be, the idea described here is most useful for those ad hoc queries you run a few times over a few hours or days to keep an eye on something, then might never use again. The https server is only important if there is sensitive data in the query results and you wish to avoid serving it on the network as clear text. If that is true, then the user interface you normally use is a better option wherever you can use it. The node server lets you see the query result from any device with a browser, or from a ‘private’ browser session on someone else’s device.

[image: SQLbySSL]

An OpenSSL-generated key for self-signing, else a CA-signed certificate, is required on the database server before starting node. You could install the key and certificate in the local key repository, but that is not the method used here. Instead, a key and a certificate signing request are generated with OpenSSL. The key and self-signed cert are kept in the node.js server’s root folder. You may need to ignore an “unable to write ‘random state'” message from OpenSSL during key (1) and cert (3) generation. Keep in mind that when using a self-signed certificate you must also click through a browser warning informing you that the certificate is not signed by a certificate authority (CA). A few modern browsers will not allow you to click through this screen and so will not work here – stock Chrome, Firefox, Android and Safari work just fine. Also keep in mind that anyone who can get your key and certificate can decipher a cached copy of any bits you shoved through SSL tunnels built with that key and certificate. Guard that key closely.

three ways to a self-signed certificate that will encrypt a TLS 1.2 tunnel
1. prompt for file encryption phrases and distinguished name keys
  // genrsa and similar are superseded by genpkey (old form: openssl genrsa -out key.pem 1024)
  openssl genpkey -algorithm RSA -out key.pem -pkeyopt rsa_keygen_bits:1024
  openssl req -new -key key.pem -out request.csr
  openssl x509 -req -in request.csr -signkey key.pem -out cert.pem 

2. no prompts - your distinguished name (DN)
  openssl genpkey -algorithm RSA -aes-256-cbc -out key.pem -pkeyopt rsa_keygen_bits:1024 -pass pass:keyFileSecret
  openssl req -new -key key.pem -passin pass:keyFileSecret -out request.csr -passout pass:certFileSecret -subj "/DC=org/DC=YABVE/DC=users/UID=123456+CN=bwunder" -multivalue-rdn
  openssl x509 -req -in request.csr -signkey key.pem -out cert.pem 

3. one command - no request file - no prompts
  openssl req -x509 -newkey rsa:1024 -keyout key.pem -out cert.pem -passin pass:keyFileSecret -passout pass:certFileSecret -days 1 -batch 

The key used to generate the request is then used to sign the request certificate. Certificate and key are saved as .pem files in the node.js server folder. You could even roll-your-own perfect forward secrecy – that is to say, automate generating and signing a new key before every request. Not quite perfect, but this could allow you to keep going in ‘manual mode’ with or without an urgent upgrade to close a risk that is not considered a risk when using perfect forward secrecy – at least until perfect forward secrecy is rendered ineffective in a few years.

Adding the one-command key generation as the “prestart” script in the node package.json will get you a new key each time you start the node server.
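A minimal sketch of such a package.json, reusing the one-command generation from option 3 above; the package name and server.js entry point are hypothetical placeholders:

  {
    "name": "sqlbyssl",
    "version": "0.0.1",
    "scripts": {
      "prestart": "openssl req -x509 -newkey rsa:1024 -keyout key.pem -out cert.pem -passout pass:certFileSecret -days 1 -batch",
      "start": "node server.js"
    },
    "dependencies": { "edge": "*", "edge-sql": "*" }
  }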

You may need a firewall rule allowing inbound TCP traffic on the port serving SSL pages (8124 in the example) if you want to hit the query from your smart phone’s browser or any remote workstation that can ping the database server on the port assigned to the https server – and you will present your Windows domain credentials for authentication unless you hardcode a SQL login username/password in the connection string (not recommended).
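On Windows Server 2012 such a rule can be added from an elevated command prompt; the rule name below is an arbitrary placeholder and the port must match the port variable in the script:

  netsh advfirewall firewall add rule name="node https 8124" dir=in action=allow protocol=TCP localport=8124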

As for the connection string: edge expects to find a connection string for the SQL Server in an environment variable set in the shell where node.exe is called, before the node process starts.

SET EDGE_SQL_CONNECTION_STRING=Data Source=localhost;Initial Catalog=tempdb;Integrated Security=True

Lastly, when the node server is started you will be prompted at the console to enter a PEM pass phrase. It is not clear from the prompt, but this is the phrase used to encrypt the private key file – I used ‘certFileSecret’ (the -passout value) in the example above.
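If typing the phrase at every start becomes a nuisance, node’s tls options also accept a passphrase member – a sketch, reusing the options object from the code below; hard-coding the phrase trades away some safety since it then sits in plain text beside the key:

  var options = {
    key: fs.readFileSync('./key.pem'),
    cert: fs.readFileSync('./cert.pem'),
    passphrase: 'certFileSecret'  // phrase that encrypted key.pem; omit to be prompted at the console
  };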

Happy Heartbleed day!


/*
  npm install edge
  npm install edge-sql
*/
var edge = require('edge');
var util = require('util');  // 'sys' is deprecated; util provides log() and isArray()
var https = require('https');
var fs = require('fs');

var port = 8124;
var options = {
  key: fs.readFileSync('./key.pem'),
  cert: fs.readFileSync('./cert.pem')
};

var sqlQuery = edge.func('sql', function () {/*
  SELECT top(10) qs.total_worker_time AS [total worker time]
     , qs.total_worker_time/qs.execution_count AS [average worker time]
     , qs.execution_count AS [execution count]
     , REPLACE(
         SUBSTRING( st.text
                  , ( qs.statement_start_offset / 2 ) + 1
                  , ( ( CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH( st.text )
                        ELSE qs.statement_end_offset
                        END - qs.statement_start_offset ) / 2 ) + 1 )
         , CHAR(9)
         , SPACE(2) ) AS [query text]
  FROM sys.dm_exec_query_stats AS qs
  CROSS APPLY sys.dm_exec_sql_text( qs.sql_handle ) AS st
  ORDER BY total_worker_time DESC;
*/});

// https server: runs the query and streams the result back as an html table
https.createServer( options, function( request, response ) {
  util.log('request: ' + request.url );
  if ( request.url==='/' ) {
    sqlQuery( null, function( error, result ) {
      if ( error ) throw error;
      if ( result ) {
        response.writeHead( 200, {'Content-Type': 'text/html'} );
        response.write('<!DOCTYPE html>');
        response.write('<html>');
        response.write('<head>');
        response.write('<title>SQLHawgs</title>');
        response.write('</head>');
        response.write('<body>');
        response.write('<table border="1">');
        if ( util.isArray(result) ) {
          response.write('<tr>');
          Object.keys( result[0] ).forEach( function( key ) {
            response.write('<th>' + key + '</th>');
          });
          response.write('</tr>');
          result.forEach( function( row ) {
            response.write('<tr>');
            Object.keys( row ).forEach( function( key ) {
              if (typeof row[key]==='string'&&row[key].length >=40 ) {
                // long strings (like query text) render in a disabled textarea cell
                response.write('<td><textarea disabled>' + row[key] + '</textarea></td>');
              }
              else {
                response.write('<td>' + row[key] + '</td>');
              }
            });
            response.write('</tr>');
          });
        }
        else  {
          // single object result: render as name/value rows
          Object.keys( result ).forEach( function( key ) {
            response.write( '<tr><td>' + key + '</td><td>' + result[key] + '</td></tr>');
          });
        }
        response.write( '</table>' );
        response.write( '</body>' );
        response.write( '</html>' );
        response.end();
      }
      sys.log("rows returned " + result.length)
    });
  }
}).listen(port);

util.log('listening for https requests on port ' + port);

Posted in Privacy, Secure Data

Making JSON Hay out of SQL Server Data

Moving data in and out of a relational database is a relentless run-time bottleneck. I suspect you would agree that effecting metadata change at the database is even more disruptive than the run of the mill CRUD. I often hear the same [straw] arguments for a new cloud vendor or new hardware or new skill set or a code rewrite to relieve throughput bottlenecks. But what if what you really need is a new data store model? What if you have been building, and rebuilding, fancy databases as cheaply as possible on scenic oceanfront property beneath a high muddy bluff across the delta of a rushing river from a massive smoking volcano in the rain? That is to say, maybe an RDBMS is not quite the right place to put your data all [most?] of the time? Maybe… just maybe… SQL Server – and Oracle and PostgreSQL – are passé and the extant justifications for normalization of data are now but archaic specks disappearing into the vortex of the black hole that is Moore’s Law?

On the off chance that there is some truth to that notion, I figure it behooves us to at least be aware of the alternatives as they gain some popularity. I personally enjoy trying new stuff. I prefer to take enough time examining a thing that what I am doing with it makes sense to me. In late 2012 the open source MongoDB project caught my attention. I was almost immediately surprised by what I found. Intelligent sharding right out of the box, for starters. And MongoDB could replicate and/or shard between a database instance running on Windows and a database instance running on Linux, or Android or OSX [or Arduino or LONworks?]. And there was shard-aware and array-element-aware b-tree indexing, and db.Collection.stats() – akin to SQL Server’s SHOWPLAN. Even shard-aware Map-Reduce aggregations, so the shards can be easily and properly distributed across an HDFS – or an intercontinental cluster for that matter. And tools to tune queries! I was hooked in short order on the usability and the possibilities, so I dug in to better understand the best thing since sliced bread.

The “mongo shell” – used for configuration, administration and ad hoc queries – lives on an exclusive diet of javascript. Equally easy to use API drivers are available from MongoDB.org for Python, Ruby, PHP, Scala, C, C++, C#, Java, Perl, Erlang and Haskell. There is more to the API than you find with the Windows Azure or AWS storage object, or Cassandra or SQLite for that matter, but still not as much complexity for the developer or waiting for results for the user as is invariably encountered with relational models.

In the course(s) of learning about the API – and struggling to remember all the things I never knew about javascript and the precious few things I never knew I knew about javascript – I found myself working with, and schooling myself on, node.js (node). Node is a non-blocking, single-threaded workhorse suitable for administrative work and operational monitoring of servers, smartphones, switches, clouds and ‘the Internet of things‘. The mongo shell is still the right tool for configuration, indexing, testing and most janitorial grunt work at the database. Unlike node, the shell is not async by default, and not all of the low-level power of the mongo shell is exposed through the APIs. Nonetheless, node uses the native javascript MongoDB API. And I must say that having the application and the db console in the same language, using the exact same data structures, is huge for productivity. Minimal impedance for the DBA making the mental shift between mongo server and node callbacks. Virtually no impedance for the developer shifting between app layers and data layers!
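For instance – a sketch assuming the products collection built later in this post – the same filter document works verbatim in the shell and in node; only the async callback wrapper differs:

  // mongo shell (synchronous): db.products.find( { Color: 'Red' } ).count()
  // node with the native driver (asynchronous):
  var mongoClient = require('mongodb').MongoClient;
  mongoClient.connect( 'mongodb://localhost:27017/test', function( error, db ) {
    if ( error ) throw error;
    db.collection('products').find( { Color: 'Red' } ).count( function( error, n ) {
      if ( error ) throw error;
      console.log( 'red products: ' + n );
      db.close();
    });
  });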

Perhaps node.js is a seriously excellent cross-platform, cross-device administrative tool as I believe, but I can only guarantee that it is fun. It is an open source environment with potential beyond any I can imagine for Powershell or ssh. Packages written in C, C++, C#, Java and/or Python that expose functionality through javascript libraries exist for connection to node. Node makes no bones that MongoDB is the preferred data store. I am by no means a data store connoisseur, though I have snacked at the NoSQL corner store and stood in the Linux lunch line often enough to feel entitled to an opinion. You’ll have to take it from there.

FWIW: 10gen.com, MongoDB’s sponsor corp, has a real good 6-lesson course online for kinesthetic learners that will give you journeyman skills with MongoDB and get you started with node.js. Mostly hands-on, so there is a lot of homework. And it’s free.

Update April 22, 2015: While checking out the otherwise mediocre Node.js Jump Start course on the Microsoft Virtual Academy I did come upon another resource that might be better suited to the visual learner: The Little MongoDB Book by Karl Seguin, available as a no-cost pdf – or in low-cost hardcopy at your favorite book store.

To otherwise help ease your introduction – if you decide to kick the MongoDB tires using node.js and you are a SQL DBA – I provide an example data migration below that moves a SQL Server model into a MongoDB document collection you can easily set up locally or modify to suit your data. Most of the work will be completing the download and installation of the few open source software libraries required.

For data here I use the simplified products table hierarchy from the AdventureWorksLT2012 sample database available from codeplex. Product is easily recognizable as what I will call the unit of aggregation.

The unit of aggregation is all data that describes an atomic application object or entity. From the already abstracted relational perspective one could think of the unit of aggregation as everything about an entity de-normalized into one row in one table. In practice, many relational data models have already suffered this fate to one extent or another.      

In the AdventureWorksLT database I see three candidates for unit of aggregation: customers (4 tables), products (6 tables) and Sales (2 tables, parent-child – the child is probably the unit of aggregation). Product is interesting because there are nested arrays (1 to many relationships) and a grouping hierarchy (category). Here is a diagram of the SQL Server data:

[diagram: ProductsDbDiagram]

This is loaded into a collection of JSON documents of products with the following format. The value (right) side of each name-value pair in the document indicates the source table and column from the SQL data.

[ 
  {
    _id : Product.ProductID,
    Name: Product.Name,
    ProductNumber: Product.ProductNumber,
    Color: Product.Color,
    StandardCost: Product.StandardCost,
    ListPrice: Product.ListPrice,
    Size: Product.Size,
    Weight: Product.Weight,
    SellStartDate: Product.SellStartDate,
    SellEndDate: Product.SellEndDate,
    DiscontinuedDate: Product.DiscontinuedDate,
    ThumbNailPhoto: Product.ThumbNailPhoto,
    ThumbNailPhotoFileName: Product.ThumbNailPhotoFileName,
    rowguid: Product.rowguid,	
    ModifiedDate: Product.ModifiedDate,
    category: 
      {
        ProductCategoryID: ProductCategory.ProductCategoryID,
        ParentProductCategoryID : ProductCategory.ParentProductCategoryID,
        Name: ProductCategory.Name,
        ModifiedDate: ProductCategory.ModifiedDate 	
      },
    model:
      {
        ProductModelID: ProductModel.ProductModelID,
        Name: ProductModel.Name,
        CatalogDescription: ProductModel.CatalogDescription,
        ModifiedDate: ProductModel.ModifiedDate,
        descrs: 
          [
            {
              ProductDescriptionID: ProductModelProductDescription.ProductDescriptionID,
              Culture: ProductModelProductDescription.Culture,
              Description: ProductDescription.Description,
              ModifiedDate: ProductDescription.ModifiedDate 	
            }
            ,{more descrs in the square bracketed array}... 
          ]
      }
   }
   ,{more products - it's an array too}... 
 ] 

The code is executed from a command prompt with the /nodejs/ directory in the environment path. I am using node (0.10.25) on Windows Server 2012 with SQL Server 2012 SP1 Developer Edition at the default location and MongoDB 2.2.1 already installed prior to installing node. SQL Server is running as a service and mongod is running from a command prompt. I am using only Windows Authentication. For SQL Server access I am using the edge and edge-sql npm packages. edge asynchronously marshals T-SQL through the local .NET framework libraries and returns JSON, but only works on Windows.

edge-sql result sets come back to the javascript application as name-value pairs marshaled from a .NET ExpandoObject that looks and smells like JSON to me. The work left after the queries return results is merely to assemble the atomic data document from the pieces of relational contention and shove it into a MongoDB collection. This all works great for now, but I am not totally convinced that edge will make the final cut. I will also warn you that if you decide to adapt the script to another table hierarchy you will be forced to come to understand closures and scope in javascript callbacks. I hope you do. It’s good stuff. Not very SQLish though.
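The crux is the immediately invoked function wrapped around each cursor row in the load script below: without it, every callback would see only the loop’s final value. A minimal sketch of the difference:

  // without a closure every timer logs the same final i: 3, 3, 3
  for ( var i = 0; i < 3; i++ ) {
    setTimeout( function () { console.log( 'no closure: ' + i ); }, 0 );
  }
  // an immediately invoked function captures each value as j: 0, 1, 2
  for ( var i = 0; i < 3; i++ ) {
    ( function ( j ) {
      setTimeout( function () { console.log( 'closure: ' + j ); }, 0 );
    })( i );
  }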

/*
  npm install mongodb
  npm install edge
  npm install edge-sql

  edge expects a valid SqlClient connection string in an environment
  variable before node is started:
  EDGE_SQL_CONNECTION_STRING=Data Source=localhost;Initial Catalog=AdventureWorksLT;Integrated Security=True

  edge-sql is a home run when aiming SQL data at a JSON target because
  you just supply valid T-SQL and edge will return the ADO recordset
  as a JSON collection of row objects to the scope of the query callback.

  edge-sql for a production system is a strike out (w/backwards K):
  1. returns only the first result requested no matter how many
     results are produced
  2. the javascript file containing the edge.func text is vulnerable to
     SQL injection hijack by adding a semicolon followed by any valid
     T-SQL command or statement, provided the first word in the
     edge.func callback comment is insert, update, delete or select
     (not case sensitive)
  STEEEEERIKE 3. the connection is made with the security context of the
     Windows user running the script, so database permissions and data
     can be hijacked through an attack on the file containing an
     edge.func('sql')
*/

var edge = require('edge');
var mongoClient = require('mongodb').MongoClient;

var mongoURI = 'mongodb://localhost:27017/test';

// function expressions evaluated at load time: paste a tested SQL query
// into each function's comment block and edge parses it back out and
// executes it as async ADO.NET
var sqlProductCursor = edge.func( 'sql', function () {/*
  SELECT  ProductID
        , ProductCategoryID
        , ProductModelID
   FROM SalesLT.Product;
*/});
var sqlProduct = edge.func( 'sql', function () {/*
  SELECT  ProductID AS _id
        , Name
        , ProductNumber
        , Color
        , StandardCost
        , ListPrice
        , Size
        , Weight
        , SellStartDate
        , SellEndDate
        , DiscontinuedDate
        , ThumbnailPhoto -- varbinary MAX!
        , ThumbnailPhotoFileName
        , rowguid
        , ModifiedDate
  FROM SalesLT.Product
  WHERE ProductID = @ProductID;
*/});
var sqlProductCategory =  edge.func( 'sql', function () {/*
  SELECT ProductCategoryID
       , ParentProductCategoryID
       , Name
  FROM SalesLT.ProductCategory
    WHERE ProductCategoryID = @ProductCategoryID;
*/} );
var sqlProductModel = edge.func( 'sql', function () {/*
  SELECT ProductModelID
        , Name
        , CatalogDescription
        , ModifiedDate
  FROM SalesLT.ProductModel
  WHERE ProductModelID = @ProductModelID;
*/});
var sqlProductModelProductDescription =
  edge.func( 'sql', function () {/*
    SELECT pmpd.ProductDescriptionID
          , pmpd.Culture
          , pd.Description
          , pd.ModifiedDate
    FROM SalesLT.ProductModelProductDescription AS pmpd
    LEFT JOIN SalesLT.ProductDescription AS pd
    ON pmpd.ProductDescriptionID = pd.ProductDescriptionID
    WHERE ProductModelID = @ProductModelID;
*/});		

// one connection: drop any products collection left by a previous run,
// then walk the SQL cursor once the drop callback fires
mongoClient.connect( mongoURI, function( error, db ) {
  if ( error ) throw error;
  db.collection('products').drop( function( dropError, dropped ) {
    // a dropError here just means there was no old collection to drop
    sqlProductCursor( null, function( error, sqlp ) {
      if ( error ) throw error;
      for ( var i = 0; i < sqlp.length; i++ ) {
        ( function ( j ) {
          sqlProduct (
            { "ProductID" : j.ProductID },
            function ( error, product ) {
              sqlProductCategory (
                { "ProductCategoryID" : j.ProductCategoryID },
                function ( error, category ) {
                  sqlProductModel (
                    { "ProductModelID" : j.ProductModelID },
                    function ( error, model ) {
                      sqlProductModelProductDescription (
                        { "ProductModelID" : j.ProductModelID },
                        function ( error, descrs ) {
                          model[0].descrs = descrs;
                          product[0].category = category[0];
                          product[0].model = model[0];
                          db.collection('products').insert( product,
                            function( error, inserted ) {
                              if ( error ) throw error;
                            });
                        }); // descrs
                    }); // model
                }); // category
            }); // product
        })( sqlp[i] ); // closure captures this cursor row
      }
    });
  });
});
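When the run goes quietly, a quick pass in the mongo shell confirms the shape of the hay bale – a sketch, assuming the default test database and the 'en' culture present in AdventureWorksLT:

  use test
  db.products.count()                                        // expect one document per SalesLT.Product row
  db.products.findOne()                                      // eyeball one fully assembled product document
  db.products.ensureIndex( { "model.descrs.Culture" : 1 } )  // b-tree index reaching into the nested array
  db.products.find( { "model.descrs.Culture" : "en" } ).count()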

That’s all there is to it.

Posted in NoSQL

Background Checks for EVERYBODY!

A background check simply filters and formats personal information about an eating, breathing person into a somewhat standard and therefore, presumably, useful “packet”. Much of the information in a background check is already out there in the public domain. Most of the rest is already controlled and/or owned by the government. What is missing is the true and just application of filters and formats – the so-called algorithms – needed to organize and maintain said “packets” as useful to humanity.  

There are algorithms in use – primarily by marketers, but also governments and other well funded cartels – though we are offered no transparency, and the accounts that do manage to “leak” to within the public’s earshot suggest any claimed intention is at least dubious and too often just another scene in a bad but never-ending slapstick of buffoonery. Consider the place(s) that sold and shipped thousands of rounds of ammo to a disturbed individual who would, months later, commit mass murder after reaching out to mental health professionals for help. I can only wonder if the determination of the health care system and the public safety agency(s) to keep this guy off the street might have been enough to prevent the deaths of innocent movie-goers in Aurora, Colorado.

Anyone who has never done so might be shocked to see what anyone can learn about them at web sites that traffic in other people’s personal details. Anyone willing to pay the low, low price can look deeply into your background without your permission. To be sure, there are a number of web sites where anybody in the world can pay a few bitcoins to see a disturbing amount of information about you. That is to say: obtain your most vital personally identifiable information (PII) without leaving a trace.

US Government agencies like the IRS, NSA, FBI, CIA and ATF; industrial surveillance engines like Google, Bing and Yahoo; the myriad of cookie-powered marketing and transaction data crapitalists; not to mention the massive offline archives of banks, insurance companies, retailers and wholesalers – all routinely accumulate giga-scads of data useful for checking a person’s background. Ever so slowly, the players are sharing select bits of this information to make a buck, but so far everyone still totally sucks at cooperating to render this information useful to humanity, opting instead for self-interests (e.g., profit, criminality, manipulation of the public, fear of reprisal, redress, revenge, etc…).

As I have previously blogged, Government agencies hold mandates for unfettered, unquestioned and, as I remain convinced, un-American access to ‘pen and tap’ data in every data center and central office in the country. What they cannot take for the asking – or with a little, sometimes heavy-handed, coercion – they just take. Government regulations intended to help may be making the situation worse. HIPAA, for example, sorta standardizes health information and compels health care providers to store your medical records in a supposedly secure but professionally shareable electronic form. Combine that with cryptography now widely suspected to be hacked by government agencies, and the downstream agencies are surely licking their chops over this pool of easy data, working quietly behind the scenes to make sure that whatever happens, the backdoor will always be open to them. Security is fine. Would be even better if it actually worked though, and we know this doesn’t… So why isn’t there a plan to be of genuine service to the people this data is about? That’s all I want to know.

Government data proper can often be classified “public information”. Yet we the public have to know who to ask or pay – and too often, it seems, the secret word and/or the appropriate political alignment – to see it. Even among and between government agencies there exists no mandate to transparently share data.

Stores routinely collect video surveillance and purchase transaction data. Each will process, archive, aggregate and perhaps share the collected data in whatever way management has decided.

Many 24-7 news operations exist at the local, national and international levels.

The data is already out there. What is missing is a way of using this already collected – and in many cases already aggregated – data so that the right decisions can be made at the right time. Even worse, the impetus for politicians remains too much about what not to tell the constituency. The real issue here is transparency, not a fair and just background checking algorithm. The people calling the shots don’t like the heat in that kitchen.

The Artificial Intelligence (AI) and Business Intelligence (BI) tools now in use, in union with the data already being collected, are enough to implement fair and just algorithms to determine such things as who should or should not buy a gun – and to assist with implementation planning that doesn’t end in a shoot-out well before the first shot has been volleyed.

A background check must assure the greatest measure of accuracy, accountability and transparency possible from the body of information about a person that is already in the lawfully shareable domains. With transparency in background checks, anyone and everyone must be able to find all background information about themselves, where that information came from, and a history of all previous reads. The person could then work to correct any errors – and weaknesses – that show up in their own background. A person would get a de facto background check every time they created an entry or update in the shareable data set. Furthermore, mental health warning signs could be followed up as thresholds of concern are approached, rather than through interventions of greater concern later.

Make no mistake, EVERYBODY’s PII data is out there in the wild right now. The tyranny is that the data, who uses it and how it is used can be lawfully kept from that person’s view and beyond their control. It can easily be used to persuade or deceive that person, and as easily be manipulated – by someone who may or may not know anything about that person – to deceive others.

Still and all, imagine how anyone might react when they ‘fail’ a background check while trying to buy a gun – especially if they already suffer with mental health issues or are already wanted for past crimes. Point-of-sale background checks that actually do what they are intended to do would surely make a bad situation worse – at least some of the time – when doled out as a pass-or-fail ticket awarded to one citizen waiting in line but not the next, especially if the ‘fail’ means “no gun today” and the person brought the intention to do harm along with them to the gun seller. Couple that with the facts that most gun buyers in the US today are not buying their first gun – they are already armed – and that an unknown number of guns trade hands in private and possibly black market exchanges. (When going after a problem where too many people are dying because too many people have guns, the most laughable solution is to give more people guns. I mean,… what could go wrong?)

Imagine, from yet another perspective, how it might affect the vote if voters could query candidate background data only to find that a favorite politician had failed to disclose a long history of mental illness or abuse – or even which, if any, of the judges you are supposed to approve at each election were blatantly corrupt or prescription drug abusers.

Imagine if those responsible for hiring our teachers and police had similar information when evaluating teaching candidates or even the longest tenured educators. And imagine we had the same information about teachers and police as the people who hire them. And teachers and police had the same information about us. And we had access to that information about our doctor or car mechanic or date? Shouldn’t everyone be exposed to the same level of scrutiny? NRA spokesperson Wayne LaPierre? President Barack Obama? You? Me?! It’s actually way too late to dicker over who should get such scrutiny. It’s happening now to everyone, but not equally and without transparency or proper oversight. It’s also well protected by widespread denial.

Still and all, I do wonder what would become of those of us who don’t meet the background check sniff test at some point. After living all my life in this society where people prefer not to know their neighbors, that truth could be so disturbing that even greater chaos ensues. So many are now armed with assault weapons, and so few police have the training and skills required to recognize – let alone counsel – a person safely through a mental health crisis, that there probably is not a lot anyone can do about guns already in the wild for generations without something as drastic as a massive weaponized domestic drone campaign to ‘take away the guns’ from the cold dead droned hands of those labeled “should not have guns”. More realistically, mental health interventions will remain a point of difficulty that will require the minds of our best trained and most skillful scientists, clergy and communicators.

It is beyond belief that the political system has even been discussing a plan that could so obviously end with a “take his guns away” order followed by desperation and then, too often, a shoot-out. Ordered revocation will not work any better than chasing the homeless out of town (again) or the prohibition of alcohol or hemp.

And that seems to be where the conversation is now stuck for eternity. Some folks seem to believe that background checks are a waste of time and won’t help a thing. The other side says background checks are not enough, and even more rules and regulations are necessary, convinced that the type of weapon foretells a propensity for misuse. Meanwhile the politicians provide us the predictable disservices of misinformation, stonewalling and feckless bullshit in the hope that nothing changes other than an ever increasing number of zeros in their bank balances.

To get the conversation moving, perhaps we have to stop blaming any narrow slice of the population (e.g., gun buyers, disturbed individuals, terrorists, tree huggers, religious fundamentalists or even corrupt politicians) as the source of a systemic problem and, in so doing, erroneously seeing prevention as the elimination of certain stereotypical yet otherwise law-abiding persons from the population. As far as I can tell, criminals, haters and lunatics will find ways to do their deeds whether or not they can legally buy a gun or even use a gun. We need a system that ameliorates the aberrant behaviors before they become headlines of death and disaster. Continuous cradle-to-grave background checks, coupled with qualified trans-personal counselors reaching out regularly to help us understand what our background check is signaling, are essential. Unfortunately, those in positions of power would place too much of their power at risk under such a structure, simply because EVERYBODY has a few “red flags” in their background.

The American public is apparently around 90% behind the need for better background checking of potential gun owners. Seems to me like everyone in the country is a potential gun owner and can easily circumvent ATF regulations just like always. Doesn’t that potential technically make everyone a candidate for a pre-purchase background check? And since anyone can buy a gun at any time, isn’t it important to maintain that background check? It is entirely possible to buy a gun at any time, so why on earth should we wait until someone is buying a gun to help them with the issues identified in their background data? Shouldn’t we be doing all we can to protect our communities from those risks easily uncovered through a background check?

All political systems of our time may already be too corrupt to give even lip service to a solution based upon transparency, accountability and human dignity. That is but one disadvantage of the bought-and-paid-for oligarchies we now suffer and the financially driven, politically biased media that would so quickly lose ratings if accountability and transparency of background checks were the rule for everybody. Probably, politicians would be the biggest losers. A reasonable compromise among all or even most elected officials is not possible in this time when everything but Wikileaks happens behind closed doors. Background checks must be transparent to be beneficial to humanity. Backroom deals bargain away posterity for all for the self-interests of the politicians and their buddies.

Background checks are already as much a part of life as death and taxes. The problem is that current background checks are sloppy, incomplete, inconsistent and highly susceptible to producing corrupt results. I bet city cops, for example, get a much different background check than seasonal city workers. But I am not at all convinced the cop gets the better or more informative check of the two.

What we really need to do is agree on what needs to be in a background check and then go about the business of compiling and checking backgrounds consistently throughout the population. Whatever it is, I have no doubt that the US Government already has more than enough access to personal information to do this work. The Generals in charge of this access appear to not even need the OK of legislators or citizens. But instead of doing this work, we are witnessing a very different militarization of law enforcement in this country. A militarization apparently meant to entertain: to show the people they have nothing to fear through media plays of shock and awe or some such thing. A militarization that has not been instrumental in making us any safer, but showcases an impotent law enforcement supported mostly by clever videographers competing to compose the most dangerous or scary footage of the situation and eloquent spokespersons spinning the truth in the desired direction.

Consider the Boston Bombing. Authorities failed in every way to identify, or seek to isolate, bags full of bombs scattered about the finish area as a real risk before the instant of attack. There was a well known, if not specific, risk, yet there was no preparation or screening to prevent such a simple attack. To identify the bombers, they relied upon video collected tediously after the fact from each local business that had previously been compelled to install surveillance cameras for self-protection against [lesser?] crimes. It took all the next day for authorities to filter the data and come up with a couple of freeze frames of video deemed ‘safe’ enough to release to the public. Then authorities had to crowd-source the identity of those in the images – but just enough picture to get that name. All the while, authorities maintained a clear strangle-hold on the media covering the event. This demonstrates once again – and without any doubt – that the government has the technology and the data access to quickly and extensively check any person’s background once they have the person’s name. Unfortunately, the late and well orchestrated photo release backfired when the bombers were shown that their covers were blown, and so thought it best to make a run for it. Duh. That could have been the end of it, but one of the two escaped after firing 200+ rounds and lobbing two bombs in a battle with police instigated by the criminals en route out of town. The authorities then continued with a heavily armed show of military force as they spent a long, hard, fruitless day searching door to door to door in combat gear. The camouflage-uniformed, automatic-weapon-toting combat cops and the armored and camouflaged vehicles in Boston were all over the TV in Colorado that day, all day. I was certainly hypnotized to and horrified by the media coverage. Finally the authorities called off the search empty handed at the end of the day and vacated a ‘sheltering in place’ order for a million people in Boston – even though it was not known to be any more or less safe. But involving the people was once again that little flash of transparency the situation needed. A citizen found the bomber in his back yard, well outside of the search area. Hours later the bomber was finally arrested – but only after government agencies had unloaded a couple of clips from police assault weapons into the hiding place of what turned out to be an already unarmed and already badly wounded criminal. All the next day the spokespeople of the various authorities took turns telling the TV camera what a great job the authorities had done.

Now that the event seems past, I can only wonder how much faster this tragedy could have been brought to a conclusion if the police were transparent and accountable instead of operating as splintered, secret military and para-military operations. Or will they now do anything that might improve their chances of preventing bags full of bombs from being spread around the finish line of the next Boston Marathon?

I am certainly not trying to defend these bombers. I am saying that background checks for everybody already show promise on those rare occasions when given a chance to work. And I’m saying that everyone already gets background checks that they are not aware of. Granted, I’m talking about an explicit, ongoing background check with complete transparency and full accountability. The government would at last get to stop pretending it does not invade the privacy of citizens inappropriately.

The development and recurring revenue potential for universal background checks alone may be among the most impactful governmental concessionaire’s opportunities of all time! Not that I advocate for a Big Brother state. I just think we ought to use the data already being collected and aggregated – and therefore already mandated in accordance with CALEA to be available for the pleasure and [mis]use of law enforcement – to also benefit those the data is about. Who better than those already playing by CALEA rules to lead the push? I want the access and knowledge already expended by the agencies – and the corporations they do business with – about my background to be extended also to me, and to allow me to extend it to others I have determined have a bona fide reason to peer into my background. And of course, I want to know who looks at my background. I believe that the right way to ‘enforce’ background checks is through trans-personal counseling. If it all happens before a crime is committed, why even involve the police? For that matter, why pretend that counseling a person to help them avoid trouble is equivalent in any way to criminal enforcement actions?

Maybe all we have to do is turn caring about others into a guaranteed revenue stream and capitalism will magically protect us from homegrown terrorism? I admit, though, there is $omething about that idea that does not fill me with hope, but I can’t quite lay my hands on enough of it…

Seriously, the time to make contact with a person who has done something – or enough somethings – that should disallow them from buying a gun or bullets or even a weapon repair part is the moment they should no longer be allowed to do such things. Waiting until they intend to act – and so want to buy a gun – to tell them they cannot buy the gun does not solve any problem before creating another, potentially even more contentious one.

I would much rather see law enforcement equipped with the truth as borne by my background – and also required to live among those they police – than the aloof militarized militias we now see so coldly clubbing, shooting and spraying large throngs of unarmed, non-violent protesters, most often youthful and frequently not white, who gather in outrage at the many and blatant injustices of the time. I also note that the cops have an impressive kill rate for old men, supposedly always “angry old men” and typically “barricaded” in their own homes, so I keep a low profile. Perhaps I am most frustrated to see these over-funded, highly secretive, militarized agencies only ever able to arrive on the scene after the evil ones among us have acted with such terrible consequence to so many innocents. Seems like gross overkill for that sort of a mop-up operation, but damn! don’t they look badass on TV.

Posted in Privacy

Tails from a Diskless Hyper-V

The Amnesic Incognito Live System (Tails) is open source privacy software. Tails “helps you to use the Internet anonymously almost anywhere you go and on any computer but leave no trace…”. This post explores how a Windows private cloud or data center might leverage Tails to harden defense in depth.

Tails is a Debian Linux .iso rootkit – er I mean boot image – configured to enable peer-to-peer encryption of e-mail messages and IM messages plus The Onion Router’s (Tor) anonymous SSL web browsing. Tor’s add-ins set [java]scripting and cookies off for all web sites by default, although the user can elect to allow scripts or cookies on a per site basis. The recommended way to use Tails is to burn a verified download of the .iso on to a write-once DVD and then use that DVD as the boot device to start the computer. Tails is mostly designed and configured to leave no trace in that scenario and to assure that once the verified image is laid down on a DVD it cannot be changed. 

One limitation of this preferred scenario is that you need to reboot the machine twice each time you use Tails: once to boot from the DVD and once more to wipe the footprint left behind in memory.

Another limitation is that a DVD drive may not be readily available or accessible when needed. Tails developers suggest an almost-as-secure USB alternative to the DVD, but caution that an ability to surreptitiously modify the kernel is introduced. Tails also allows the user to manually configure local storage, opening a potential security hole. Local storage is needed, for example, to load cryptographic keys for the secure OTR IM and PGP email messaging apps included for peer-to-peer privacy. Tails does automagically configure a piece of its memory as a RAMdisk, allowing keys to be introduced without persistence, in theory. Virtualization too, I propose, can remove the reboot overhead; however, the Tails documentation cautions against running Tails as a virtual machine (VM). “The main issue,” they say, “is if the host operating system is compromised with a software keylogger or other malware.” There simply is no facility for the VM to be sure no such spyware exists on the host. The usage I am suggesting below is the inverse of that trust model. Here we will use Tails to isolate the trusted host’s Windows domain from the Internet, leveraging virtualization to help preserve the integrity of the trusted node. From a practical standpoint, a better rule of thumb – though still in line with the cautious Tails statement on virtualization – may be to trust a virtual environment only to the extent you trust the underlying host environment(s) that support the virtual machine.

Nov 8, 2015 note – Unfortunately, the growing concerns that Tor is compromised are legitimate:

  https://invisibler.com/tor-compromised/
  http://www.idigitaltimes.com/best-alternatives-tor-12-programs-use-nsa-hackers-compromised-tor-project-376976

Also, for virtualization other than Hyper-V, see this information about Tails and virtual machine security at boum.org:

  https://tails.boum.org/doc/advanced_topics/virtualization/index.en.html

A Windows Domain in a physically secured data center implies that the Domain and the data center ops and admin staff are trusted. But when you open ports, especially 80/443, into that Domain, that trust is at increased risk. Given Hyper-V administrator rights on a Windows 2012 Server – but not logged in with administrative system rights on the server – using Tails from a virtual machine might just be a safer, more secure and self-maintaining usability enhancement for a Windows-centric data center or private cloud.

  • Tails can eliminate many requirements that expose the Windows Domain to the Internet. Internet risks are sandboxed on the Linux VM. The Linux instance has no rights or access in the Domain. The Domain has no rights or access to the Linux instance other than via Hyper-V Manager. Most interestingly, Tails boots to a Virtual Machine that has no disk space allocated (other than the RAM disk already mentioned).
  • Tails will thwart most external traffic analysis efforts by competitors and adversaries. DPI sniffers and pen register access in the outside world will only expose the fact that you have traversed the Internet via SSL traffic to the Tor Network. SSL will prevent most snooping between the VM and the onion servers. No more than a handful of governments – and a few other cartels with adequate processing power – will even have the ability to backdoor through the Certificate Authority or brute force the SSL to actually see where you are going on the Internet.     
  • The Tails developers take care of the security updates and other maintenance. To upgrade or patch when used in the read-only diskless Hyper-V configuration, all you need do is download the latest image file.

Some organizations may be resistant to this idea because Tails will also allow employees to privately and anonymously communicate with the outside world while at work. True enough, the Tor pen register data will simply not provide adequate forensic surveillance detail to know what data center employees are up to. That alone could put the kibosh on Tails from a Diskless Hyper-V.

Organizational fear of employees notwithstanding, Tails in a Windows data center presents a robust security profile with excellent usability for those times when the knowledge available on the Internet is urgently needed to help solve a problem or understand a configuration. I would discourage efforts to configure a backdoor to monitor actual Tails usage from the host, simply because once the back door is opened anybody can walk through, and digital back doors swing both ways: better to put your monitoring energy into making sure there is no back door.

Tails is easy to deploy as a Hyper-V VM on Windows Server 2012 (or Windows 8 Pro with the Hyper-V client):

  • download and verify the file from https://tails.boum.org. No need to burn a DVD. Hyper-V will use the .iso file, although a DVD would work too if that is preferred and will undeniably help to assure the integrity of the image. A shared copy of the .iso can be used across an environment. It is necessary to ensure that the VM host computer’s management account and the user account attempting to start the VM have full access to the file share and/or file system folder of the image.
  • add a new Virtual Machine in Hyper-V Manager. [screenshot: TailsVM]
  1. Give the VM 512MB of memory (dynamic works as well as static)
  2. Set the BIOS boot order to start with “CD”
  3. Set the .iso file – or physical DVD drive if that option is used – as the VM DVD Drive.
  4. Configure the VM with a virtual network adapter that can get to the Internet.

May 5, 2014 note – I had to enable MAC spoofing in Hyper-V for the Internet network adapter when I used the newly released Tails version 1. The checkbox is located on the Advanced Features of the Network Adapter of the VM. You will not find the Advanced Features option when accessing the Virtual Switch; it is a setting of the Network Adapter assigned to the Tails VM. I suppose another option would be to remove the MAC address hardwired into Tails’ “Auto eth0”, but that would also reduce your anonymity. It works this way, but that is all the testing I did on it! Use the hardwired MAC if possible.
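The same VM can also be stood up from an elevated PowerShell prompt on Server 2012 using the Hyper-V module – a sketch, where the switch name "External" and the image path are placeholders for your environment:

  New-VM -Name "Tails" -MemoryStartupBytes 512MB -SwitchName "External"
  Set-VMBios -VMName "Tails" -StartupOrder @("CD","IDE","LegacyNetworkAdapter","Floppy")
  Set-VMDvdDrive -VMName "Tails" -Path "C:\iso\tails.iso"
  Set-VMNetworkAdapter -VMName "Tails" -MacAddressSpoofing On
  Start-VM -Name "Tails"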

  • Start the VM and specify a password for root when prompted. You will need to recreate the root password each time you start the VM in the diskless configuration. It can be a different password for each re-start, but you should still use a strong password: the isolation of the Internet from the local Domain depends upon the security of this password. You never need to know that password again after you type it twice, and you don’t want anyone else to know it either… ever.
  • Use the Internet privately and anonymously from your shiny new diskless Virtual Machine.
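
For the script inclined, the same recipe reduces to a few cmdlets from the Hyper-V Powershell module on the host. A minimal sketch, assuming the module is available; 'TailsVM', 'ExternalSwitch' and the .iso path are placeholders for your own names:

# a minimal sketch of the steps above; 'TailsVM', 'ExternalSwitch' and the .iso
# path are placeholders - no -VHDPath or -NewVHDPath is given so the VM stays diskless
New-VM -Name 'TailsVM' -MemoryStartupBytes 512MB -SwitchName 'ExternalSwitch'
# attach the verified Tails image (use Set-VMDvdDrive if the VM already has a DVD drive)
Add-VMDvdDrive -VMName 'TailsVM' -Path '\\fileshare\iso\tails.iso'
# step 2: start the BIOS boot order with "CD"
Set-VMBios -VMName 'TailsVM' -StartupOrder @('CD','IDE','LegacyNetworkAdapter','Floppy')
# the Advanced Features MAC spoofing checkbox from the May 5, 2014 note
Set-VMNetworkAdapter -VMName 'TailsVM' -MacAddressSpoofing On
Start-VM -Name 'TailsVM'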

Iceweasel can browse the Internet just fine in the diskless configuration. PGP and OTR, however, both require persisted certificates, and that requires disk storage. Instant Messenger and POP email using the tools in Tails won’t happen unless persistent certificates are available. There are probably a number of ways certificate availability could be realized, e.g., RAM disk, network, fob, etc.

A Hyper-V Administrator cannot be prevented from configuring storage inside the Virtual Machine if storage is available to the Hyper-V. (Hint: A Hyper-V Administrator can be prevented from configuring storage inside the Virtual Machine if no storage is available to the Hyper-V user.)

Not a total solution, but it gives a very clean ability to jump on the Internet when needed without exposing the domain to the Internet.

Posted in Privacy, Secure Data | 2 Comments

For Privacy Open the Source & Close the Back Door

There is no surprise in the many recent corporate self-admissions that they too have given up our private information. After all, they got us to release our privacy to their care with barely a flick and a click. As a direct consequence – and without need of oversight through lawful warrant or subpoena – Internet service providers (ISPs) and telecommunications service providers are compelled to release our pen registers, profiles, email and stored files to host location authorities (e.g., local, state and federal agencies) everywhere in the world when requested. The corporations can, will and have freely, willingly and routinely provided our private data, stored on their servers or clouds, upon request. And any will decipher our encrypted private data to assist such surveillance if they can. It is all done with our expressed permission.

A 2012 study found that we each would have to spend about 200 hours a year (with a calculated cost of $781 billion to GDP) to actually read all the privacy policies we have accepted. At the same time, the word count in privacy policies is going up, further reducing the likelihood that they will be read and understood. In my opinion, the purposeful design of privacy policies – so easy to accept without reading – demonstrates the Internet’s ability to coerce the user into acceptance.

“It’s OK, I have nothing to hide,” you might be thinking. And I do believe that, “It won’t hurt a thing,” is often added to such rationalizations. That sort of thinking is continuously exposed for what it is by the incessant announcements that gigantic globs of our personally identifiable information (PII) stored on corporate servers have been leaked to the bad guys through massive and mysterious spigots lurking in some company’s data center(s). The leaks signal the reminder that government-mandated surveillance back doors in the data center (DC) and central office (CO) architectures help provide the weakened security upon which Internet crooks and criminals rely.

Thanks to the server back doors, criminals and marketers enjoy the same back door transparency without accountability as do government agents or anyone else that somehow has access through the back door. Truth be told, marketers have better back door access than government agencies in many cases. Unauthorized outsiders and criminals often rely upon masquerading as an administrator, marketer or agent at the back door.

So it is. Back doors of any stripe undermine security. Exploiting server back doors is a common objective of marketers, sellers, executives, governments, employees, hackers, crackers, spies, cheats, crooks and criminals alike. The attraction is that there is no way for you to tell who is standing at the back door or who has ever accessed your PII data at the server. While intrusion detection and logging practices have improved over time, they lag in the uptake of state-of-the-art technologies. At the same time, the talents of intruders have not only kept pace with but often are defining the state-of-the-art.

Computing back-doors are not a new phenomenon. By now we could be raising our children to fear root kits as if by instinct. Root kits are just back doorknobs. Trojans? Worms? Other so-called malware – especially when the malware can somehow communicate with the outside world. It all goes out the back door. SQL Injection? Cross-site scripting? Man-in-the-middle attacks? Key-loggers? Just back doorways.

I need to take it one step further though. To a place where developers and administrators begin to get uncomfortable. Scripting languages (PowerShell, C shell, CL, T-SQL, VBA, JavaScript, and on and on and on) combined with elevated administrative authority? Back doors.

That’s right! Today’s central offices, data centers, and by extension cloud storage services – are severely and intentionally weakened at their very foundation by mandated back doors that have been tightly coupled to the infrastructure for dubious reasons of exploitation. That’s nuts!

What’s worse? We the people – as consumers and citizens – pay the costs to maintain the very electronic back doors that allow all comers to effortlessly rob us of our earnings, identities and privacy. What suckers!

And we provide the most generous financial rewards in society to the executives – and their politicians – that champion the continuation of senselessly high risk configurations that burp out our private information to all comers. That’s dumb.

~~~~~

So, how did we get here? It started way before the PATRIOT Act or September 11, 2001. The process has served to advantage government and – in exchange for cooperation – business, with little transparent deliberation and much political bipartisanship. Both corporate and political access without accountability to user personal information has been serviced at the switch for as long as there have been switches and at the server for as long as there have been servers.

To wit, Mssr. A. G. Bell, and Dr. Watson I presume, incorporated AT&T in 1885.

To implicate contemporary corporate data stewards, all one need do is look at the explosion in so-called “business intelligence” spending to see user data in use in ways that do not serve the interests of the user. Most often by aiding someone else to make money.

Some act without reasonable ethical mooring. There is a driven interest by most corporations that can afford it to use all the data at their disposal in every way imaginable in the quest to lift the bottom line. Cheap has become a people-harming virtue of capitalism. This intention to mine your data for behavioral advertising is one of the topics you could have read a few words about, deep under that “I have read” button you magically uncovered and clicked instead when presented with all those pesky privacy policies.

The legislation and adjudication in opposition to government-mandated communication back doors in the US can be followed back to the bootleggers during Prohibition, which lasted from 1920 until 1933. In 1928 the Taft Supreme Court decided (5-4) that obtaining evidence for the apprehension and prosecution of suspects by tapping a telephone is not a violation of a suspect’s rights under the 4th or 5th Amendments to the US Constitution.

The Communications Act of 1934 (Roosevelt) granted oversight of consumer privacy to the newly created Federal Communications Commission (FCC).

Beginning in the 1960s, no concern was evident as weekly television broadcasts revealed how Opie’s Pop, Sheriff Andy, could listen in on your phone calls or find out who you had talked with and what you had said in past phone conversations. All he had to do was ask Sarah at the phone company.

Alas, in 1967 the Warren Supreme Court overruled the 1928 decision (7-1) and said the 4th Amendment does in fact entitle the individual to a “reasonable expectation of privacy.” This was widely thought to mean government agents had to obtain a search warrant before listening in on a phone conversation. However, the erosion of privacy at the confluence of surveillance and profit has since become a muddy delta.

Privacy protection during “any transfer of signs, signals, writing, images, sounds, data, or intelligence of any nature transmitted in whole or in part by a wire, radio, electromagnetic, photo-electronic or photo-optical system that affects interstate or foreign commerce” was revoked in the US – in a bi-partisan fashion – by the Electronic Communications Privacy Act (ECPA) of 1986 (Reagan). ECPA effectively expanded the reach of the Foreign Intelligence Surveillance Act (FISA) of 1978 (Carter) to include US citizens: heretofore protected by the Bill of Rights from being spied upon by the US government.

No one I know had an email address in 1986. So no one cared that ECPA stripped American citizens of their email privacy. No one I know does not have an email address in 2013. Still, few seem alarmed that there has been no electronic privacy in the US since 1986. Judging by the popularity of the Internet-as-it-is and in the light of the unrelenting and truly awful stories of hacking, identity theft and stalking coming to the fore every day, perhaps nobody even cares?

With the Communications Assistance for Law Enforcement Act (CALEA) of 1994 (Clinton), the full burden of the costs to provision and maintain an expanded ECPA surveillance capability was thrust upon the service provider. I leave it to you to decide how service providers funded the levy (hint: profits are up). Providers were now required to build data centers and central offices with a guaranteed and user-friendly listening ability for ECPA surveillance agents: the free-swinging back door became a government mandate.

The Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA PATRIOT ACT) of 2001 (Bush 2) removed any need to notify an individual that they had been under surveillance until and unless authorities arrest and charge that individual. The burden of electronic privacy was placed squarely on the individual. Privacy officially died.

Even now agencies play games with the USA PATRIOT ACT by not charging US non-citizens and holding them in indefinite detention, perhaps in part to avoid having to disclose surveillance methods should the matter come to trial? I have no way to know what they are doing, but the pattern of escalating surveillance permissiveness in legislation over time suggests that it is only a matter of time before the ability to hold citizens without charge becomes an effective sledgehammer methodology for agencies and eventually the police. History is clear that such detainment will be used, and will be used inappropriately.

Still the politicians remain unsatisfied? In 2008, FISA was amended to effectively eliminate the distinction in agency surveillance between an enemy combatant and a citizen. Now everyone is ‘the enemy’ through the FISA visor. FISA Amendment changes continue to ripple through ECPA, CALEA and USA PATRIOT ACT regulations: an expansion of authority to a force already claimed by its leadership to be stretched too thin to keep track of what it is doing, accompanied by a decrease in already inadequate and faltering judicial oversight.

In 2006, and then again in 2011, the USA PATRIOT ACT regulations that were supposed to expire – because they would make the country safe enough in a limited time to not be needed in the future – were extended… and re-extended.

Recently the NSA claimed it would violate our privacy if they secretly told two US Senators approximately how many citizens they had electronically spied on. Why is that not disturbing to most? It is worth noting that the Generals of the NSA – yes, the US military calls the shots on privacy for all American citizens – made it clear at that time that perhaps no one has a way to tell who has and has not been electronically spied upon – as an alternative way to explain why they could not answer the Senators’ question.

It might be OK if privacy had been fairly traded for security, but that has not happened. Instead, the government has given our privacy to these unaccountable agencies and the terrorism continues. The police and other agencies are arriving in time to clean up the mess, spending shit loads of the public’s money putting on a good show for the cameras, and spinning the truth about how much these laws are helping. They may be getting better at stopping the second terror attack of a particular stripe, but that is only valuable to society when the bad guys repeat a type of attack. So far, that is not happening. The agencies are being pwned big time and don’t even notice because they are busy reading your email.

The 4th Amendment is null and void – unless you have a gun. The 9th Amendment is now about exceptions and exclusion to rights instead of the protection of rights not named elsewhere in the Bill of Rights as the unchanged text of the amendment would suggest. If I understand correctly, even the 1st Amendment has been struck. I’m not a constitutional expert, but I am 100% positive privacy is out the window like a baby toy. And now we are too far down that road to even think about going back to find it.

Our government is now self-empowered to spy on the people, self-evidently convinced it must spy on the people and self-authorized to exterminate its own citizens without the process of law we are told is due every citizen. This is territory most inconsistent with the Constitution of the United States as I understand it and wholly unacceptable to the vast majority of the citizenry with knowledge of the matter as far as I can tell. Indeed, what people on earth should tolerate such governance?

Update August 22, 2015. The USA FREEDOM Act of 2015 (Obama) stirs the muddy waters of privacy but in the end is little more than a re-branding effort that hopes to squelch the post-Snowden outcry against mass surveillance.

 

~~~~~   

So, what can be done? Here are some guiding principles for anyone seeking to take back their online privacy.

  1. There is no plan to delete anything. Never write, type, post or say anything [on-line] you do not want others to see, read, overhear or attribute to you. Anything you put on the Internet just may be out there forever. IBM has boasted the ability to store a bit of data in 12 atoms. Quantum data storage is just around the corner. Search technology is making bigger strides than storage technology.
  2. You cannot take anything back. Accept that all the information you may have placed online at any time – and all so-called ‘pen registers’ that document your interactions during placement – does not belong to you. Sadly, you may never know the extent of compromise of this not-yours-but-about-you data until it is too late to matter. The single most important action you can take to safeguard what little is left of your privacy – from this moment forward – is to use only peer reviewed Open Source privacy enabled software when connected to the Internet.
  3. Stop using social web sites. There are many ways to keep track of other peoples’ birthdays. There is not much worth saying that can be properly said in one sentence or phrase and understood by boodles of others. Makes for good circus and gives people something to do when there is nothing appealing on TV, but not good for communication or privacy. Combine the keywords from your clucks, demographics from your birthday reminder list and your browsing history, and it is far more likely that you can be effectively advertised into a purchase you had not planned or researched the way you likely claim you always do. Such behavior-inducing advertising in essence cheapens life, and it makes a few people a lot of money.
  4. Avoid web sites that know who you are. Search engines and portals, like all free-to-use web sites, get their money by looking through the back door and keeping the history. Maybe forever? This data is not generally encrypted, nor even considered your data. Nonetheless, anyone that can hack into this not-your-data has the information needed to recreate your search history and, in all likelihood, to identify you if so desired. Corporate data aggregations and archives – so-called data warehouses – often leave related data available to business analysts, developers, network engineers and any other sneaks who might impersonate them, in a nicely prepared user interface that can drill down from the highest aggregations (e.g. annual sales or total population) to the actions and details of an individual in a few clicks. Once ordered, organized, indexed and massaged, this data remains available in perpetuity. Protect your browsing history and searches from as much analysis as possible – a favorite pen-register-class surveillance freebie for governments (foreign & domestic), marketers, and criminals alike. One slightly brutal way might be to surf only from freely accessible public terminals and never sign in to an online account while surfing from that terminal. An easier and open source – but still more work than not caring – way may be to hit Tor’s onion servers using Firefox and Orbot from your Android device or the Tor browser bundle from your Linux desktop or thumb drive. (We have no way to know if the Windows or Mac desktops are backdoored.) You could even combine the two approaches with Tails.
  5. Use open source software you trust. Avoid any computer use while logged in with administrator or root authority. Particularly avoid connections to the Internet while logged in as the administrator or root. Avoid software that requires a rooted smartphone or a local administrator login during use.
  6. adopt peer-to-peer public key cryptography (p2p) 
    1. securely and safely exchange public keys Confidence in the integrity of the privacy envelope of your communications and exchanges with others begins with the key exchange.
    2. exchange only p2p encrypted emails Never store your messages, even if encrypted by you, on a mail server, else you forgo your right to privacy by default. I think US law actually says something like: when your email is stored on somebody else’s mail server it belongs to that somebody else, not to you. Even Outlook would be better, but Thunderbird with the Enigmail OpenPGP add-on is a proven option for PGP encryption using any POP account. The hard part will be re-learning to take responsibility for your own email after becoming accustomed to unlimited public storage (and unfettered back door access). It will also become your responsibility to educate your friends and family about the risks to convince them to use peer-to-peer public key cryptography and secure behaviors too. Until then your private communications to those people will continue to leak out no matter what you do to protect yourself.
    3. exchange only p2p encrypted messages For SMS text, investigate TextSecure from Open Whisper Systems. I don’t have a suggestion for SMS on the desktop. For other messaging check out Gibberbot, which connects you through the Tor network on your Android device. If used by all parties to the chat, this approach will obfuscate some of your pen registers at the DC and all of your message text. Installing Jitsi adds peer-to-peer cryptography to most popular desktop Instant Messaging clients. Jitsi does not close any back doors or other vulnerabilities in IM software. Your pen registers will still be available at the server and attributable to you, but your private information will only be exposed as encrypted gibberish. Using the onion servers with Jitsi or Gibberbot will help obfuscate your machine-specific metadata, but the IM server will still know it is your account sending the message. Security experts seem convinced that Apple loudly advertises the iMessage back-door: http://blog.cryptographyengineering.com/2012/08/dear-apple-please-set-imessage-free.html
    4. exchange p2p encrypted files If you get the key exchange (item 1 above) right, this will be a breeze.
    5. exchange p2p encrypted SMS messages else avoid SMS. I briefly used TextSecure from Open Whisper Systems on Android 4.x. I don’t have a secure, tested Windows or Linux desktop suggestion for SMS.
    6. exchange p2p encrypted voice communications Web phone Session Initiation Protocol (SIP) providers are subject to the same pen and tap logging rules as all other phone technologies. The biggest practical differences between SIP and good old Signaling System 7 (SS7) or cellular switching are the open source software availability and the throughput potential. With SIP, several open source apps are available now built upon Zimmermann’s ZRTP for peer-to-peer encryption of SIP-to-SIP multimedia capable conversations. I know Jitsi includes ZRTP by default for all SIP accounts registered. When a call is connected the call is encrypted, but ONLY if the other party to the call is also using a ZRTP peer.
  7. avoid trackers, web bugs and beacon cookies Cookies are tiny files that don’t get wiped when you leave the page, and they can be an invaluable enhancement for the user experience. Cookies have become impossible to manage manually because there are so many and because the cookie bakers try to make it difficult for you to determine the ingredients of the cookies on your machine filled with your data. That is so creepy. What could go wrong? But trackers are the worst. They keep collecting your data even after you leave the baker’s web site and disconnect from the Internet. Then every chance they get these cookies will gather more data from you and ever so slyly push your data out to some slimy crapitalist death star. Lots of trackers come in or as advertisements. As many or more are “web bugs” that may even masquerade as embedded structural elements of the page – and so are almost certainly an intentional component of the page. The classic beacon cookie is an image file with no image, just a tracked data collector, yet done in a way that convinces most (all?) tracker detectors of its innocence. However, it would be foolish to characterize trackers as always built one way or another. The design goal is and will always be to not look like a tracker. In today’s world, I believe it safe to say that the tracker builders continue to have an easy time keeping a long step ahead of the tracker trackers. I have AdBlock Plus, Ghostery and the EFF’s Privacy Badger running on the browser I am using to edit this old page. AdBlock Plus finds 4 ads, but blocking has to be off or I am not able to edit the blog post. Ghostery finds 3 web bugs and Privacy Badger identifies 7 trackers for this page. Google appears to have a tracker that Privacy Badger does not disable by default. Could be I made the wrong choice on some Google policy 11 years ago, could be Google imposes this unblocked, or could be it is relatively benign, though I mostly doubt that scenario is plausible.

As you can see, effective tools are available to protect on-line privacy. What’s more, the bad guys are already using these tools! The challenge for us is a human behavioral issue that demands little more than awareness and a willingness to cooperate. Could be that cooperation is the overpowering impediment in these polarized times. Only if everyone in the communication values privacy and respects one another enough to move together to a peer-to-peer public key cryptography model using widely accepted open source software can that software hope to assure privacy.

We must start somewhere. I repeat, the bad guys made the changes long ago; your resistance serves only your demise and the ability of others to profit from your data until such a time as you make the change too.

Sadly, I’m not at all sure how to convince any of you to not flock toward the loudest bell. You’d be throwing your privacy to the wolves yet again. With each catastrophe perpetrated by the very bad guys that the rape of our privacy was supposed to protect against, the media immediately and loudly lauds the police and other agencies for doing such a great job and proclaims how lucky we are to have them freely spying upon our most personal matters. The agencies, for their part, continue to bungle real case after real case yet maintain crafty spokespeople to pluck a verbal victory from the hind flanks of each shameful defeat for privacy. Turns out the agencies don’t even use the pen registers and tap access for the surveillance they claim to be crucial. Instead it is a helper when sweeping up the mess left behind by the last bad guy that duped them. Why are the agencies not preventing the horrible events as was falsely promised during the effort to legitimize their attacks on our personal privacy?

For genuine privacy, all people in the conversation must confidently and competently employ compatible p2p cryptography. That’s all there is to it. Until folks once again discover the fundamental value in private communications and public official transparency, public accountability is beyond reach… and your privacy and my privacy will remain dangerously vulnerable.

Posted in Code Review, Privacy, Secure Data | Leave a comment

It’s [Still!] the SQL Injection… Stupid

Did you see Imperva’s October 2012 Hacker Intelligence Report? The report is a data mining study directed toward the on-line forum behaviors among a purportedly representative group of hackers. The milestone for October 2012 is that Imperva now has a year’s worth of data upon which to report. Almost half a million threads in the mine. In this study, keyword analysis found SQL injection sharing the top of the heap with DDoS in terms of what hackers talk about in the forums. In the wake of Powershell 3.0, the study also identifies shell code as the big up-and-comer for mentions in the hacker threads. Only 11% of the forum posts mention “brute force”. “Brute force” is the only topical category Imperva charted with a direct relationship to cryptography.

The absence of an aggregate specifically dedicated to cryptography or encryption strongly suggests the hackers are not talking much about cryptography. Hmmmm.

Keyword frequency in 439,587 forum threads:

  1. SQL injection 19%
  2. DDoS 19%
  3. shell code 15%
  4. spam 14%
  5. XSS 12%
  6. brute force 11%
  7. HTML injection 9%

The report also cites disturbing data from a 2012 Gartner study, “Worldwide Spending on Security by Technology Segment, Country and Region, 2010-2016”. The Gartner study purportedly finds that less than 5% of money spent on computing security products buys products that are useful against SQL injection.

Holy octopus slippers! The statistics definitely make a good case for taking a look at Imperva’s products. Even if there is some marketing bias in the study – I don’t think so, but saying even if there is – the findings are more bad news for data security. What’s worse is we seem to be headed in the wrong direction. Consider:

  • The SQL Server 2008 Books Online had a page of SQL injection prevention best practices that was removed from the SQL Server 2012 edition.
  • SQL injection prevention guidance has been largely unchanged for many years yet is not widely followed. Input validation is the key. Judging by the hacker interest in SQL injection, adoption of the guidance must be low.
  • Hackmageddon.com’s Cyber Attack Statistics indicate that SQL injection is found in over 25% of the hacker attacks documented during October 2012.
  • Cloud host FireHost – a VMware-based web host with an impressive security claim and data centers in Phoenix, Dallas, London and Amsterdam – reported that attacks its infrastructure had detected and defended showed a 69% spike in SQL injection attacks in the second quarter of 2012 (and then cross-site scripting surged in the just-ended Q3).

What is stopping us from truly going after this problem? Input validation is not hard and need not be a bottleneck. Threat detection is supposedly a part of any ACE/PCI/FIPS/HIPAA compliant system. Detection avoidance is well understood. Nonetheless, and for reason$ far beyond reasonable, the strategic emphasis remains stuck on compliance. That would be OK if the standards of compliance were even close to adequate. Clearly they are not. The proof is in the pudding.
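
To see how low the input validation bar really is, consider the parameterized query – the cheapest and most effective anti-injection habit there is. A minimal sketch using the .NET SqlClient from Powershell; this is not code from any product named below, and the connection string, table and column names are hypothetical:

# the user text is bound as a typed parameter value - the engine never parses
# it as T-SQL, so injected commands arrive as inert data
$conn = New-Object System.Data.SqlClient.SqlConnection( 'Server=.;Database=Sales;Integrated Security=SSPI' )
$cmd = $conn.CreateCommand()
$cmd.CommandText = 'SELECT OrderId, Total FROM dbo.Orders WHERE CustomerName = @name;'
[void] $cmd.Parameters.Add( '@name', [System.Data.SqlDbType]::NVarChar, 128 )
$cmd.Parameters['@name'].Value = $userInput   # even "x'; DROP TABLE dbo.Orders;--" stays data
$conn.Open()
$reader = $cmd.ExecuteReader()
while ( $reader.Read() ) { $reader['OrderId'] }
$conn.Close()

Parameterization does not excuse skipping the other layers of validation, but it removes the cheapest seats from the injection theater.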

There are SQL injection detection and avoidance products out there that work. Many, many products. Just to name a few – and by way of describing the types of tools that forward thinking organizations are already buying in an effort to eradicate SQL injection:

Application Delivery Network (ADN)

  • Citrix NetScaler
  • F5’s BIG-IP
  • Fortinet FortiGate

Web Application Firewall (WAF)

  • Applicure dotDefender WAF
  • Cisco ACE Web Application Firewall
  • Imperva Web Application Firewall & Cloud WAF
  • Barracuda Networks Web Application Firewall
  • armorize’s SmartWAF (Web server host based)

Web Vulnerability Scanners (WVS)

  • sqlmap (free)
  • Acunetix WVS

Unified Threat Management (UTM)

  • Check Point UTM-1 Total Security Appliance
  • Sophos UTM Web Server Protection
  • WatchGuard XTM

These products and the overarching practice changes needed to implement them show success in going after the problem. But, as the Gartner study shows, nobody seems to be buying it.

There are also cloudy platform hosts and ISPs like FireHost that handle the WAF for organizations that cannot justify the capital costs and FTEs required to do the job right in-house due to scale.

Ever so slowly the major hosts are imposing WAF upon all tenants. Another n years at the current snail’s pace and the security improvement just might be noticeable. Seems to me like “do it and do it now” is the only choice that can reverse a ridiculous situation that has gone on too long already. Even secure hosts prioritize profitability over basic security. That is rent seeking.

Any web presence that takes another tack is telegraphing priorities that violate the duty to protect that which is borrowed under an explicit promise of confidentiality or generally accepted fiduciary performance levels equivalent to all other financial and intellectual assets of that organization. Few meet the sniff test. Many remain highly profitable. Just take a look in your Facebook mirror. Customers and consumers have no visibility into how any organization will honor this responsibility nor recourse when that duty is shirked. The metrics above make clear that poor security practices shirk the responsibility and carelessly feed the identity theft racket. It must end. Organizations that perpetuate this status quo and remain profitable are complicit in an ongoing crime against humanity. IT staff who are fairly characterized as “team players” or leaders in such organizations are every bit as culpable as the soldiers of Auschwitz or My Lai or the owner of a piano with ivory keys.

Organizations private and public have a fundamental obligation to protect customers, clients and citizens from illegal exploitation. What in the world makes it OK to exclude chronic identity theft violations from that responsibility?

Even when the data center budget includes one of the more robust solutions, to have done the needful in terms of basic input validation, code/user authentication and principle-of-least-privilege access rights is essential for any good defense-in-depth security strategy.

Consider this T-SQL function from the Encryption Hierarchy Administration schema that implements a passphrase hardness test based upon the SQL injection prevention guidance in the SQL 2008 BOL.

1   -------------------------------------------------------------------------------
2   --    bwunder at yahoo dot com
3   --    Desc: password/passphrase gauntlet
4   --    phrases are frequently used in dynamic SQL so SQL Injection is risk
5   -------------------------------------------------------------------------------
6   CREATE FUNCTION $(EHA_SCHEMA).CheckPhrase 
7     ( @tvp AS NAMEVALUETYPE READONLY )
8   RETURNS @metatvp TABLE 
9     ( Status NVARCHAR (36)
10    , Signature VARBINARY (128) )
11  $(WITH_OPTIONS)
12  AS
13  BEGIN
14    DECLARE @Status NVARCHAR (36)
15          , @Name NVARCHAR(448)
16          , @UpValue NVARCHAR (128) 
17          , @Value NVARCHAR (128) ;
18    -- dft password policy as described in 2008R2 BOL + SQL Injection black list
19    -- fyi: SELECT CAST(NEWID() AS VARCHAR(128)) returns a valid password 
20    SET @Status = 'authenticity';
21    IF EXISTS ( SELECT *
22                FROM sys.certificates c
23                JOIN sys.crypt_properties cp
24                ON c.thumbprint = cp.thumbprint
25                CROSS JOIN sys.database_role_members r
26                WHERE r.role_principal_id = DATABASE_PRINCIPAL_ID ( '$(SPOKE_ADMIN_ROLE)' ) 
27                AND r.member_principal_id = DATABASE_PRINCIPAL_ID ( ORIGINAL_LOGIN() )  
28                AND c.name = '$(OBJECT_CERTIFICATE)'
29              AND c.pvt_key_encryption_type = 'PW'
30                AND cp.major_id = @@PROCID 
31                AND @@NESTLEVEL > 1 -- no direct exec of function 
32                AND IS_OBJECTSIGNED('OBJECT', @@PROCID, 'CERTIFICATE', c.thumbprint) = 1
33                AND EXISTS ( SELECT * FROM sys.database_role_members 
34                              WHERE [role_principal_id] = USER_ID('$(SPOKE_ADMIN_ROLE)')
35                              AND USER_NAME ([member_principal_id]) = SYSTEM_USER 
36                              AND SYSTEM_USER = ORIGINAL_LOGIN() ) )        
37      BEGIN
38        SET @Status = 'decode';
39        SET @Name = ( SELECT DECRYPTBYKEY( Name 
40                                         , 1
41                                         , CAST( KEY_GUID('$(SESSION_SYMMETRIC_KEY)') AS NVARCHAR (36) ) ) 
42        FROM @tvp );
43        SET @Value = ( SELECT DECRYPTBYKEY( Value, 1, @Name ) FROM @tvp );                    
44        IF PATINDEX('%.CONFIG', UPPER(@Name) )  -- no strength test, will fall through 
45         + PATINDEX('%.IDENTITY', UPPER(@Name) )             
46         + PATINDEX('%.PRIVATE', UPPER(@Name) ) 
47         + PATINDEX('%.SALT', UPPER(@Name) )           
48         + PATINDEX('%.SOURCE', UPPER(@Name) ) > 0       
49          SET @Status = 'OK';
50        ELSE
51          BEGIN
52            SET @UpValue = UPPER(@Value);
53            SET @Status = 'strength';
54            IF ( (    ( LEN(@Value) >= $(MIN_PHRASE_LENGTH) )   -- more is better
55                  AND ( PATINDEX('%[#,.;:]%'
56                      , @Value ) = 0 )   -- none of these symbols as recommended in BOL 
57                  AND ( SELECT CASE WHEN PATINDEX('%[A-Z]%'
58                                                  , @Value) > 0 
59                                    THEN 1 ELSE 0 END    -- has uppercase
60                              + CASE WHEN PATINDEX('%[a-z]%'
61                                                  , @Value) > 0 
62                                    THEN 1 ELSE 0 END    -- has lowercase  
63                              + CASE WHEN PATINDEX('%[0-9]%'
64                                                  , @Value) > 0 
65                                  THEN 1 ELSE 0 END    -- has number
66                              + CASE WHEN PATINDEX('%[^A-Za-z0-9]%'  -- has special
67                                                  , REPLACE( @Value,SPACE(1),'' ) 
68                                                  ) > 0  
69                                    THEN 1 ELSE 0 END ) > 2 ) )   -- at least 3 of 4
70              BEGIN 
71                -- black list is not so strong but can look for the obvious 
72                SET @Status = 'injection';                       
73                IF ( PATINDEX('%[__"'']%', @UpValue)   -- underscore (so no sp_ or xp_) or quotes
74                   + PATINDEX('%DROP%'   , @UpValue)   -- multi-character commands... 
75                   + PATINDEX('%ADD%'    , @UpValue)
76                   + PATINDEX('%CREATE%' , @UpValue)
77                   + PATINDEX('%SELECT%' , @UpValue)
78                   + PATINDEX('%INSERT%' , @UpValue)
79                   + PATINDEX('%UPDATE%' , @UpValue)
80                   + PATINDEX('%DELETE%' , @UpValue)
81                   + PATINDEX('%GRANT%'  , @UpValue)
82                   + PATINDEX('%REVOKE%' , @UpValue)
83                   + PATINDEX('%RUNAS%'  , @UpValue)
84                   + PATINDEX('%ALTER%'  , @UpValue)
85                   + PATINDEX('%EXEC%'   , @UpValue)
86                   + PATINDEX('%--%'     , @Value)     -- comments...
87                   + PATINDEX('%*/%'     , @Value) 
88                   + PATINDEX('%/*%'     , @Value)  = 0 )
89                  BEGIN 
90                    SET @Status = 'duplicate';
91                    IF NOT EXISTS ( SELECT *                  -- not already used  
92                                    FROM $(EHA_SCHEMA).$(NAMEVALUES_TABLE) n
93                                    WHERE ValueBucket = $(EHA_SCHEMA).AddSalt( '$(SPOKE_DATABASE)'
94                                                                              , '$(EHA_SCHEMA)'
95                                                                              , '$(NAMEVALUES_TABLE)'
96                                                                              , 'ValueBucket' 
97                                                                              , @Value)
98                                    AND CAST(DecryptByKey( n.Value -- should be rare
99                                                          , 1
100                                                         , @Name ) AS NVARCHAR (128) )  =  @Value )  
101                     SET @Status = 'OK';
102                 END
103             END
104          END
105     END
106   INSERT @metatvp
107     ( Status
108     , Signature ) 
109   VALUES 
110     ( @Status
111    , SignByCert( CERT_ID('$(AUTHENTICITY_CERTIFICATE)'), @Status ) );
112   RETURN;
113 END
114 GO
115 ADD SIGNATURE TO $(EHA_SCHEMA).CheckPhrase 
116 BY CERTIFICATE $(OBJECT_CERTIFICATE)
117 WITH PASSWORD = '$(OBJECT_CERTIFICATE_ENCRYPTION_PHRASE)';
118 GO

SQL injection input validation is only part of what goes on here. The function accepts an already encrypted name-value pair TVP as a parameter and returns a signed business rule validation result as a TVP. To do so, first the schema and user authenticity are verified, then the phrase is decoded and the SQL injection detection rules are applied. Only if all rules are met will an IO be required to verify that the phrase has not already been used.

The bi-directional encoding of parameters with a private session scoped symmetric key helps to narrow the SQL injection threat vector even before the filter(s) can be applied. This means that the passed values have already successfully been used in a T-SQL ENCRYPTBYKEY command in the current database session. Not that encryption does anything to prevent or detect SQL injection. It is more that the first touch of any user input value carries higher risk. Likewise the first use of an input in any dynamic SQL statement carries a higher risk. Always better to do something benign with user input before you risk rubbing it against your data.
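
For a sense of the caller’s side of that handshake, here is a minimal sketch from Powershell. The names are placeholders – 'SessionKey' and 'SessionCert' stand in for the utility’s SQLCMD-substituted identifiers, and $name/$value hold the clear text pair – the point is only that the ciphertext is minted on the same connection that will call CheckPhrase:

# open the session key and encrypt the name/value pair on THIS connection;
# ENCRYPTBYKEY output is only decryptable where the key can be opened
$conn = New-Object System.Data.SqlClient.SqlConnection( 'Server=.;Database=ehdb;Integrated Security=SSPI' )
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = @"
OPEN SYMMETRIC KEY SessionKey DECRYPTION BY CERTIFICATE SessionCert;
SELECT ENCRYPTBYKEY( KEY_GUID('SessionKey'), @name, 1
                   , CAST( KEY_GUID('SessionKey') AS NVARCHAR (36) ) ) AS Name
     , ENCRYPTBYKEY( KEY_GUID('SessionKey'), @value, 1, @name ) AS Value;
"@
[void] $cmd.Parameters.AddWithValue( '@name', $name )
[void] $cmd.Parameters.AddWithValue( '@value', $value )
$row = $cmd.ExecuteReader()
[void] $row.Read()   # Name and Value are now VARBINARY ciphertext ready to
                     # load the NAMEVALUETYPE TVP passed to CheckPhrase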

In the process of validation, two black lists are used to filter punctuation (line 55) and specific character sequences (lines 73-88) frequently identified as injection markers.

Another function from the schema validates names for T-SQL encryption hierarchy key export files. In this function the black list filter also includes file system specific markers as identified in the same SQL Server 2008 R2 Books Online article. The somewhat cumbersome PATINDEX() driven exclusion filter pattern is used in the file name function just as it is for hardness testing.

1   -------------------------------------------------------------------------------
2   --    bwunder at yahoo dot com
3   --    Desc: apply file naming rules and conventions
4   --    name not already in use and no identified sql injection
5   -------------------------------------------------------------------------------
6   CREATE FUNCTION $(EHA_SCHEMA).CheckFile 
7     ( @Name VARBINARY (8000) )
8   RETURNS BIT
9   $(WITH_OPTIONS)
10  AS
11  BEGIN
12    RETURN (SELECT CASE WHEN  PATINDEX('%[#,.;:"'']%', Name) 
13                            + PATINDEX('%--%', Name)
14                            + PATINDEX('%*/%', Name)
15                            + PATINDEX('%/*%', Name)
16                            + PATINDEX('%DROP%', Name)
17                            + PATINDEX('%CREATE%', Name)
18                            + PATINDEX('%SELECT%', Name)
19                            + PATINDEX('%INSERT%', Name)
20                            + PATINDEX('%UPDATE%', Name)
21                            + PATINDEX('%DELETE%', Name)
22                            + PATINDEX('%GRANT%', Name)
23                            + PATINDEX('%ALTER%', Name) 
24                            + PATINDEX('%AUX%', Name) 
25                            + PATINDEX('%CLOCK$%', Name) 
26                            + PATINDEX('%COM[1-8]%', Name)
27                            + PATINDEX('%CON%', Name) 
28                            + PATINDEX('%LPT[1-8]%', Name) 
29                            + PATINDEX('%NUL%', Name) 
30                            + PATINDEX('%PRN%', Name) = 0
31                        AND NOT EXISTS 
32                ( SELECT *   -- COUNT(*) would always yield a row and defeat NOT EXISTS
33                  FROM $(EHA_SCHEMA).$(BACKUP_ACTIVITY_TABLE)
34                  WHERE BackupNameBucket = $(EHA_SCHEMA).AddSalt( '$(SPOKE_DATABASE)'
35                                                                , '$(EHA_SCHEMA)'
36                                                                , '$(BACKUP_ACTIVITY_TABLE)'
37                                                                , 'BackupNameBucket' 
38                                                                , Name ) )    
39                        THEN 1 ELSE 0 END
40            FROM (SELECT CAST( DECRYPTBYKEY ( @Name ) AS NVARCHAR(448) ) AS Name  
41                  FROM sys.certificates c
42                  JOIN sys.crypt_properties cp
43                  ON c.thumbprint = cp.thumbprint
44                  CROSS JOIN sys.database_role_members r
45                  WHERE r.role_principal_id = DATABASE_PRINCIPAL_ID ( '$(SPOKE_ADMIN_ROLE)' ) 
46                  AND r.member_principal_id = DATABASE_PRINCIPAL_ID ( ORIGINAL_LOGIN() )  
47                  AND c.name = '$(OBJECT_CERTIFICATE)'
48                  AND c.pvt_key_encryption_type = 'PW'
49                  AND cp.major_id = @@PROCID 
50                  AND @@NESTLEVEL > 1 
51                  AND IS_OBJECTSIGNED('OBJECT', @@PROCID, 'CERTIFICATE', c.thumbprint) = 1
52                  AND EXISTS (SELECT * FROM sys.database_role_members 
53                              WHERE [role_principal_id] = USER_ID('$(SPOKE_ADMIN_ROLE)')
54                              AND USER_NAME ([member_principal_id]) = SYSTEM_USER 
55                              AND SYSTEM_USER = ORIGINAL_LOGIN() ) ) AS derived );
56  END
57  GO
58  ADD SIGNATURE TO $(EHA_SCHEMA).CheckFile 
59  BY CERTIFICATE $(OBJECT_CERTIFICATE)
60  WITH PASSWORD = '$(OBJECT_CERTIFICATE_ENCRYPTION_PHRASE)';
61  GO

When processing happens one string at a time this filter is of small performance concern. However, processing a large set against such a filter could be slow and disruptive. The objects are obfuscated into the database WITH ENCRYPTION so only those with elevated access, development environment access and source repository access are likely to be aware of the filter details. Most of the logic in the functions is to verify the authority and authenticity of the caller and the calling object.

These functions demonstrate that fundamental SQL injection protection is easily achievable even for an application with the crippled regular expression support of T-SQL. If performance or load is a service level issue, the CLR might be a better host for the functions. However, as with obfuscation, the most important place to validate against SQL injection is at the point where data enters the system. In some cases SQL injection protection done after the unvalidated text moves inside SQL Server will be too late. Only when the user interface is a SQL command line could it be the best security choice to validate against SQL injection inside SQL Server. In scope, SQL injection prevention is an application layer exercise. That being said, skipping the SQL injection validation inside SQL Server is reckless. Robust security will employ layers of defense.

In my mind, the command line too must always be considered as an attack vector, even if not a favorite SQL injection attack vector at this time and even if the application makes no use of the command line. As the metamorphosis of the now fluid security landscape unfolds, the hackers will be shining their cyber-flashlights everywhere and chasing anything shiny. To discount that the command line will continue to get a good share of malicious attention as long as there are command lines and hackers is negligence and/or denial.

For the Encryption Hierarchy Administration schema that uses the functions above, a Powershell deployment and administration interface is helpful to improve security. With a pure T-SQL approach there is always a risk of exposure of user-input secrets in memory buffers before the input value can be ciphered by the database engine. Granted, it is a brief period of time, and I am not even sure how one would go about sniffing SQLCMD command line input without running in a debugger or waiting for the input to move into database engine workspace. It surely must be available somewhere in memory. The scenario is a target. I know I have never checked whether the operating system is running a debugger in any T-SQL script I have ever written. This helps to illustrate why the best time and place to encrypt data is at the moment and place it is generated or enters the system. Even then, 100% certainty will remain elusive if the system cannot be verified to be keylogger free.

The utility models a somewhat unusual scenario where encryption at the database is the right choice. Nonetheless, getting the many secrets required for the administration of encryption keys and key backups entered into the system presents a potential for exposure to memory mining hackers. Using SecureString input and SecureString based SMO methods to instantiate the database objects that need the secrets can eliminate much of that vulnerability. As you may know, a SecureString is a .NET object that is encrypted with a hash from the current user’s session at all times while in memory, with deterministic cleanup of that memory that can be more secure than garbage collection. It is relatively easy for the user to decrypt the SecureString data on demand, but doing so results in the sensitive information becoming available as clear text in the memory registers where the unencrypted copy is written. No other users have access to the encryption key.

  
function Decode-SecureString 
{   
    [CmdletBinding( PositionalBinding=$true )]
    [OutputType( [String] )]
    param ( [Parameter( Mandatory=$true, ValueFromPipeline=$true )]
            [System.Security.SecureString] $secureString )  
    begin 
    { $marshal = [System.Runtime.InteropServices.Marshal] }
    process 
    { # copy the decrypted secret to an unmanaged BSTR, then out as a managed string
      $BSTR = $marshal::SecureStringToBSTR( $secureString )
      $marshal::PtrToStringAuto( $BSTR ) } 
    end
    { # zero and free the unmanaged copy - the returned managed string remains exposed
      $marshal::ZeroFreeBSTR( $BSTR ) }
}

Decode-SecureString $( ConvertTo-SecureString '1Qa@wSdE3$rFgT' -AsPlainText -Force )

Powershell obfuscates Read-Host user input of type SecureString with an asterisk (*) on the input screen. With the ISE you get a WPF input dialog that more clearly shows the prompt but could also become annoying for command-line purists.

To evaluate the use of a Powershell installer, I coded a Powershell installation wrapper for the hub database installation scripts of the utility. The hub database needs 4 secrets: the passwords for four contained database users. With this change it makes no sense to add an extra trip to the database to evaluate the hardness and SQL injection vulnerability of each secret. Instead, the SQL injection input validation logic from the T-SQL functions above is migrated to a Powershell Advanced Function – Advanced meaning that the function acts like a CmdLet – that accepts a SecureString.

  
function Test-EHSecureString 
{   
    [CmdletBinding( PositionalBinding=$true )]
    [OutputType( [Boolean] )]
    param ( [Parameter( Mandatory=$true, ValueFromPipeline=$true )] 
            [System.Security.SecureString] $secureString
          , [Int32] $minLength = 14 
          , [Int32] $minScore = 3 )  
    begin 
    { 
        $marshal = [System.Runtime.InteropServices.Marshal] 
    }
    process 
    {   # need the var to zero & free unencrypted copy of secret
        [Int16] $score = 0
        $BSTR = $marshal::SecureStringToBSTR( $secureString )
        if ( $marshal::PtrToStringAuto( $BSTR ).length -ge $minLength )
        { 
            switch -Regex ( $( $marshal::PtrToStringAuto( $BSTR ) ) )
            {
             '[#,.;:\\]+?' { Write-Warning ( 'character: {0}' -f $Matches[0] ); Break }
             '(DROP|ADD|CREATE|SELECT|INSERT|UPDATE|DELETE|GRANT|REVOKE|RUNAS|ALTER)+?' 
                           { Write-Warning ( 'SQL command: {0}' -f $Matches[0] ); Break }
             '(AUX|CLOCK|COM[1-8]|CON|LPT[1-8]|NUL|PRN)+?' 
                           { Write-Warning ( 'dos command: {0}' -f $Matches[0] ); Break } 
             '(--|\*\/|\/\*)+?' { Write-Warning ( 'comment: {0}' -f $Matches[0] ); Break }
             '(?-i)[a-z]'  { $score += 1 }
             '(?-i)[A-Z]'  { $score += 1 }
             '\d+?'        { $score += 1 }
             '\S\W+?'      { $score += 1 }
             Default { Write-Warning $switch.current; Break }        
            } 
        }
        else
        { Write-Warning ( 'length: {0}' -f $( $marshal::PtrToStringAuto( $BSTR ).length ) ) } 
        Write-Warning ( 'score: {0}' -f $score )  
        $( $score -ge $minScore )
    }        
    end { $marshal::ZeroFreeBSTR( $BSTR ) }
}

One thing is for sure: much less code is required in Powershell than in T-SQL to create a user. To securely invoke the function, a Read-Host -AsSecureString prompts for user input that goes into the memory location allocated to the SecureString, and only as encrypted data. Here the Powershell script will prompt for input until it gets an acceptable value. Remember, no one else will be able to decode this value in memory; only the user that created the SecureString. Defense-in-depth demands that care be taken that the SecureString memory location is not taken for an off-line brute force interrogation.

 

do { $HUB_ADMIN_PASSWORD = $( Read-Host 'HUB_ADMIN_PASSWORD?' -AsSecureString ) } 
until ( $( Test-EHSecureString $HUB_ADMIN_PASSWORD ) )

Then the entered secret is applied to the database using SMO. In this case a user with password will be created in a contained database.

  
# assumes the current location is the hub database's Users node in the SQLSERVER:
# provider and that $smoHubDB already references the hub database SMO object
if ( $( Get-ChildItem -Name ) -notcontains 'HubAdmin' )
{
    $HubAdmin = New-Object Microsoft.SqlServer.Management.Smo.User
    $HubAdmin.Parent = $smoHubDB
    $HubAdmin.Name = 'HubAdmin'     
    # the Create(SecureString) overload accepts the password without managed clear text
    $HubAdmin.Create( $HUB_ADMIN_PASSWORD ) 
}
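
One cleanup footnote, assuming no later step needs the password: SecureString implements IDisposable, so the script can zero the encrypted buffer deterministically instead of waiting on the garbage collector.

# zero and release the in-memory encrypted buffer once the secret has served its purpose
$HUB_ADMIN_PASSWORD.Dispose()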

The Test-EHSecureString function does expose the clear text of the secret in the PSDebug trace stream during the switch operation as $switch. To my knowledge there is no way to obfuscate that value. On top of that, the disposal of the $switch automatic variable is under garbage collection, so there is no reliable way to know when you can stop wondering if anyone found it. That uncertainty may be more of a risk than the SQLCMD exposure the secure string is supposed to solve? On the other hand, the risk of exposure of the SQLCMD setvar values is undetermined, so it would be silly to pretend to quantify an unknown risk. What I know for sure is those values have to touch primary storage buffers in order to load the variables and then populate the SQLCMD – $(VARIABLE_NAME) – tokens in the script. At least with the Powershell script I can quantify the risk and take all necessary precautions to mitigate it. With SQLCMD setvar variables, about all I can do to be certain my secrets are not fished out of the free pool or wherever else they might be buffered as clear text is remove the power. Even that is no guarantee the secrets are not leaked to a paging file, spooler or log while exposed internally or externally as clear text as the system shuts down.

At this point I’m convinced that it is best to validate Powershell SecureString input against SQL injection threats. The risk that someone will find the secrets in local memory and use them with malicious intent is far less, in my estimation, than the risk from SQL injection. I will continue to integrate Powershell into the install script with the goal of using a Powershell installer. This is a much better input scenario than the T-SQL event obfuscation technique I had been using.

I will leave the SQL injection filters in the T-SQL functions to complement the Powershell filter for two reasons: Defense-in-depth and [sic] Defense-in-depth. 

Posted in Code Review, Data Loading, Secure Data, Testing | Leave a comment