Console Tools for the SQL Server for Linux CTP2.1 Docker Image

In my ‘lab’ I have my last three laptops and a Beaglebone Black. The laptops all came with a retail Windows OS (10, 8 and 7). All have long been running Linux (openSUSE, Linux Mint and CentOS at the moment, though after 18 months I’m ready to admit openSUSE is not among my preferences for desktop Linux purposes – it’s just such a pain). The Beaglebone is on Debian Jessie for ARM. Not a Windows box to be found, other than a lingering dual-boot on the most recent machine that hasn’t been accessed – or updated! – in more than a year now.

I was easily tempted when the SQL Server for Linux hit the Internet. But playing around with the earlier CTPs without a Windows box was not very exciting. All I had was sqlcmd and bcp. Eventually I scrounged up a few more tools.

This post talks about some of the command line tools I found usable. If you have SQL Server for Linux in your future, this may be interesting and useful to you. It occurs to me that being able to query and configure a SQL Server from its console is a fundamental and essential requirement: just in case you cannot connect from anywhere else and your data is valuable to you.

/~ bash

The out-of-the-box console query tooling for SQL Server on Linux is the empty set. The server-side components of something M$ is calling ODBC are there, but the client-side ODBC mssql-tools package that provides sqlcmd and bcp must usually be installed separately to get to your data. The official Docker SQL Server image (from CTP2 on) comes with mssql-tools pre-installed (found in /opt/mssql-tools/bin inside the container).
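
For example, once inside the container (or on any box where mssql-tools is installed), a quick smoke test with sqlcmd might look something like this, the sa password being whatever was supplied at setup:

> /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P '<YourStrongPassw0rd>' -Q "SELECT @@VERSION;"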

The mssql-conf utility comes with every SQL Server for Linux, whether or not the client tools are installed. Indeed, mssql-conf can come in handy to get things set up in the file system. It should also be good for start-up trace flag configuration and TLS.

One of the first things you may want to do upon installation is look at the SQL Server errorlog, suggesting cat, tail and head could also be counted among the out-of-the-box tools at your disposal, along with whatever text editors are available on the server (e.g. Emacs, Gedit, Nano, Kwrite, LibreOffice, Bluefish, Atom, vi, etc.) and sudo/root level access to the SQL Server files. Bluefish and Atom are IDEs with some OK support for SQL syntax, giving you the ability to review and modify your library of SQL scripts with autocomplete and color formatting to highlight SQL syntax. In slight conflict with my earlier mild disdain for openSUSE’s KDE desktop, even Kwrite is not unusable as a SQL editor. One side effect for authors of ad hoc queries is that you can end up flopping between apps to toggle between building the queries, running the queries and seeing the results – even to correct a silly typo…
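
For instance, a quick look at the tail of the current errorlog from the console needs nothing more than:

> sudo tail -n 50 /var/opt/mssql/log/errorlog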

When using the SQL Server on Linux image from Dockerhub.com, you will want to include the Docker CLI in your tool set. The Docker CLI is used to start and stop the SQL Server container rather than stopping and starting the SQL Server service. It is still possible to stop only the SQL Server service of course, though doing so is a bit convoluted. You open an interactive shell inside the running container to get to the mssql-tools included in the image. All mssql-conf work for that image is done inside the running container (that is, local to the SQL instance).

> docker exec --interactive --tty <container-id> /bin/bash
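
Stopping and starting the instance then amounts to stopping and starting the container:

> docker stop <container-id>
> docker start <container-id>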

The more stable and deployable way to create SQL Server meta-data or move data in bulk has always been by script. SQL meta-data as well as routine and repetitive tasks expressed as sqlcmd batch scripts are powerful and generally portable – from one SQL Server to the next anyway. For the most part, a T-SQL script that works with SQL Server on a Windows server also works with SQL Server on Linux. Few things could so discouragingly encourage scripted work as a sqlcmd prompt staring back at you.

With the CTP there remain a few unseen obstructions in the port, and there will probably always be a potential for annoying cross-platform inconsistencies. SQL Trace and Extended Event data, for example, are written to the file system at /var/opt/mssql/log by the SQL Server on Linux but are all but useless from the Linux console. Extended Stored Procedures (xp_) that touch the file system are generally not working (yet?). Also, when script file archives are transferred to the Linux system weird things can happen, like ‘smart’ Unicode quotation glyphs replacing ASCII quotes, causing working and tested scripts to fail at db engine parse because of the tilt direction of the quotation marks. The fix is usually to avoid copying scripts onto Linux in the way that introduced the inconsistencies.

Writing ad hoc queries when debugging or researching an issue with sqlcmd can be exasperating, especially when the query doesn’t fit nicely on one line. Sure, sqlcmd understands line feeds, but once you push Enter to create a line, there is no changing it. A batch that can be edited regardless of the number of lines in that batch would be preferable. As a compromise, use the input file (-i) sqlcmd command line switch and a collection of scripts. As previously mentioned, an editor like Atom or Bluefish provides enhanced query author support. Note that script libraries stored remotely from the SQL Server create a remote dependency that may be better avoided for improved availability at critical times. Better to keep essential scripts at the server and maintain the master copy in source control.
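
A minimal sketch of that compromise (the script and output file names are only illustrative):

> sqlcmd -S localhost -U sa -i ./scripts/wait_stats.sql -o ./results/wait_stats.out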

sql errorlog

When the Docker container for a SQL Server instance is created with a persisted volume – or volumes, if only to project hopefulness – there are more choices for reviewing the SQL errorlog than when you must get inside the container to see any of SQL Server’s output files. Fortunately, everything is still in the familiar mssql/log subdirectory by default regardless of which case you find yourself in. One difference is that, without the volume, you will need to add the step of opening an interactive shell to work with the files inside the container. You also need to be vigilant for the volume being unexpectedly dropped from sync with the container during some ‘maintenance’ operation or another. And in either scenario, you can also look at the Docker logs to query all the log records available to the Docker Engine. It turns out the Docker log IS the collection of SQL errorlogs, all in one view.

> docker container logs <container-id>

The command includes a tail-like set of options to filter the log records returned, either by file position or by timestamp. As mentioned previously, several tools are included with Linux that will vary somewhat by builder; I don’t think you will find one that does not include cat. The ‘official’ SQL Server image is built on Ubuntu 16.04, so the GNU coreutils head and tail commands are in /usr/bin, and I believe that may be true for all three supported Linux flavors. You are root when you open an interactive bash shell inside a container. For a package-installed SQL Server, you will likely need root or sudo rights to use the programs in /usr/bin on the SQL Server’s files.

> docker container logs --help
Usage: docker container logs [OPTIONS] CONTAINER

Fetch the logs of a container

Options:
 --details Show extra details provided to logs
 -f, --follow Follow log output
 --help Print usage
 --since string Show logs since timestamp
 --tail string Number of lines to show from the end of the logs (default "all")
 -t, --timestamps Show timestamps
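
For example, to see only the most recent records with timestamps, or everything since a point in time:

> docker container logs --timestamps --tail 100 <container-id>
> docker container logs --since 2017-07-01T00:00:00 <container-id>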

mssql-conf

The mssql-conf utility exposes settings comparable to editable settings in the service start-up information of a SQL Server instance running on Windows. Various database file locations and properties can be set and SQL Server trace flags can be enabled/disabled. mssql-conf is a work in progress and documented with the most recently released changes at https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-configure-mssql-conf.

root@63fe05e40911:/# /opt/mssql/bin/mssql-conf
 usage: mssql-conf [-h] [-n]  ...

positional arguments:

setup Initialize and setup Microsoft SQL Server
 set Set the value of a setting
 unset Unset the value of a setting
 list List the supported settings
 traceflag Enable/disable one or more traceflags
 set-sa-password Set the system administrator (SA) password
 set-collation Set the collation of system databases
 validate Validate the configuration file

optional arguments:
 -h, --help show this help message and exit
 -n, --noprompt Does not prompt the user and uses environment variables or defaults.
root@63fe05e40911:/# mssql-conf list   
 network.tcpport TCP port for incoming connections
 network.ipaddress IP address for incoming connections
 filelocation.defaultbackupdir Default directory for backup files
 filelocation.defaultdumpdir Default directory for crash dump files
 traceflag.traceflag Trace flag settings
 filelocation.defaultlogdir Default directory for error log files
 filelocation.defaultdatadir Default directory for data files
 hadr.hadrenabled Allow SQL Server to use availability group...
 coredump.coredumptype Core dump type to capture: mini, miniplus,...
 coredump.captureminiandfull Capture both mini and full core dumps
 network.forceencryption Force encryption of incoming client connec...
 network.tlscert Path to certificate file for encrypting in...
 network.tlskey Path to private key file for encrypting in...
 network.tlsprotocols TLS protocol versions allowed for encrypte...
 network.tlsciphers TLS ciphers allowed for encrypted incoming...
 language.lcid Locale identifier for SQL Server to use (e...
 sqlagent.errorlogginglevel 1=Errors, 2=Warnings, 4=Info
 sqlagent.errorlogfile SQL Agent log file path
 sqlagent.databasemailprofile SQL Agent Database Mail profile name

When using the Docker image, use the Docker interactive prompt command previously described to get to the utility from a shell on the host. When the mssql volume is persisted on the Docker host, the mssql.conf configuration file will be viewable directly from the host, but making any changes must be done with the mssql-conf utility found only within the container.
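
For example, enabling a start-up trace flag or relocating the default data directory might look like this (the flag and path are only illustrative). On a package install you would follow with a service restart; for the Docker image, restart the container instead:

> sudo /opt/mssql/bin/mssql-conf traceflag 3226 on
> sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /var/opt/mssql/data
> sudo systemctl restart mssql-server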

NPM

The name ‘mssql’ is somewhat ambiguous. It is the name of the sub-folder where SQL errorlogs, data files and the mssql.conf configuration settings file are located (/var/opt/mssql), as well as the /opt/mssql sub-folder where the sqlservr executable lives. This latter folder also includes a library of python files that comprise mssql-conf. Other than to mention them now to help compare the Linux and Windows contexts, none of these are what is meant in this post by references to mssql. Instead our references apply specifically to the open source node-mssql npm driver for SQL Server (https://github.com/patriksimek/node-mssql) that uses the Microsoft maintained tedious javascript TDS implementation (https://tediousjs.github.io/tedious/) from a Linux client machine to query any SQL Server.

mssql is the package name at npmjs.org and in common usage as well, but the package source follows a deprecated naming convention with a ‘node’ prefix for the github.com repository name. The truth is, the user never has to explicitly invoke node.js when this package is installed globally and used at the bash prompt. It acts pretty much like any other command-line binary.

node-mssql

When the MIT licensed mssql npm package is installed globally and an appropriate ‘.mssql.json’ configuration file has been created, queries can be piped through tedious to the SQL Server directly – if a bit awkwardly – from the bash prompt. To get this set up, assuming node.js is already on the machine:

1. Install mssql globally:

> sudo npm install -g mssql

2. Create a ‘.mssql.json’ configuration file somewhere in your path (or type this connection information object as an argument to mssql every time you use it, your choice).

> echo '{ "user": "sa", "password": "<YourStrongPassw0rd>", "server": "localhost", "database": "AdventureWorks2014"}` > .mssql.json

Or, for a Docker SQL Server container from the Docker host (just like any other SQL Server available on the network at port 1433):

> echo '{ "user": "sa", "password": "<YourStrongPassw0rd>", "server": "172.17.0.1", "database": "AdventureWorks2014"}` > .mssql.json

3. Pipe a query in, get a consistently formatted JSON array of query result out

> echo "select name from sysdatabases"|mssql /home/bwunder/.mssql.json

or – if the .json file’s path is in $PATH or the file is in the current directory – use the shorthand:

> echo "select * from sys.databases where name=master" | mssql

But double quotes wrapped in single quotes will fail:

> echo 'select * from sysdatabases where name="master"' | mssql

And multi-line queries break when the author enters that first line-feed.

*Important Note: Installing both the sql-cli package mentioned below and the node-mssql package mentioned above globally creates a global name conflict – the last one installed becomes known as mssql globally at the bash prompt. The last package (re)installed always wins: installing either package with the -g switch breaks the other, even when that other is already installed and working, but it is easy enough (though not a good practice, I suspect) to toggle to and fro explicitly with another install of the one you desire now. Internally, sql-cli requires mssql. An npm uninstall of mssql when sql-cli was the last of the two to be installed produces the insightful warning:

bwunder@linux-niun:~> sudo npm uninstall -g mssql
 root's password:
 npm WARN gentlyRm not removing /usr/local/bin/mssql as it wasn't installed by /usr/local/lib/node_modules/mssql

sql-cli

When, instead*, the Apache licensed sql-cli NPM package is installed globally you get an interactive query client out of the box. With this package, when you hit Enter – each and every time you hit Enter – what you typed goes to the SQL Server. Like mssql, you only get one line of typed query text per round trip to the SQL Server. As with sqlcmd, however, submission by script file does not carry a limitation on the line feeds allowed within the script. It is only the ad hoc command line usage that is limited to query tweets.

~> sudo npm install -g sql-cli
~> mssql -s 172.17.0.1 -u sa -p '<yourstrong!passw0rd>' -d AdventureWorks2014
 Connecting to 172.17.0.1...done 

sql-cli version 0.6.2
 Enter ".help" for usage hints.
 mssql> .help 
 command             description
 ------------------  ------------------------------------------------
 .help               Shows this message
 .databases          Lists all the databases
 .tables             Lists all the tables
 .sprocs             Lists all the stored procedures
 .search TYPE VALUE  Searches for a value of specific type (col|text)
 .indexes TABLE      Lists all the indexes of a table
 .read FILENAME      Execute commands in a file
 .run FILENAME       Execute the file as a sql script
 .schema TABLE       Shows the schema of a table
 .analyze            Analyzes the database for missing indexes.
 .quit               Exit the cli
mssql> select name from sysdatabases

name
 ------------------
 master
 tempdb
 model
 msdb
 sqlpal
 AdventureWorks2014

6 row(s) returned

Executed in 1 ms
 mssql> .quit

sql-cli also:

  • has a helpful, if limited, set of command options for running a script or browsing the catalog, with comfortably formatted tabular query results (see the example just after this list).
  • does indeed depend on mssql, though probably not on the latest release of mssql.
  • includes a few catalog CLI dot commands (e.g., .help) useful to the query developer.
  • works like a sqlcmd -q interactive session, except that you never have to type GO to send a batch to the SQL Server as you must when using sqlcmd interactively.
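
For example, running a saved script from inside the sql-cli prompt (the file name is only illustrative):

mssql> .run ./scripts/create_demo_objects.sql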

The big loss when using sql-cli is bash. That is, the user must leave the sql-cli process in order to work at the bash prompt. With mssql the user is always at the bash prompt.

~

Multi-line ad hoc T-SQL queries are mostly not meant to happen with mssql or sql-cli. sqlcmd will at least let you enter them, even if only in an awkward, non-editable, buffered line reader kind of way. You will have to decide which you prefer, or else toggle as needed. When properly configured in an Active Directory Domain, mssql, sql-cli, sqlcmd and bcp on a Linux client should be able to connect to and query a SQL Server on any Windows or Linux server available to it in that Domain.

Chances are quite good that your node.js oriented tools will be using mssql elsewhere. However, sql-cli feels like a somewhat more useful console command line given the handful of catalog and query tools that come with the interface.

http://

sqlpad

If there is a Chrome browser on the server, sqlpad is a great query tool built for and upon the mssql package. Firefox and sqlpad don’t seem to get along. The sqlpad 2.2.0 NPM package is using mssql v “^3.0.0.0” according to its package.json. Some newer stuff may be missing, but the query tool is a pleasure compared to the command prompt, and tabular results are oh so much nicer in the DOM in a browser than when echo’d to a terminal. With the drop-down, IntelliSense-style autocomplete presenting database catalog object/property selections as hints in the query window, a query UI can’t get much easier to work with than sqlpad.

~> sqlpad --dir ./sqlpaddata --ip 127.0.0.1 --port 3000 --passphrase secr3t admin bwunder@yahoo.com --save
 Saving your configuration.
 Next time just run 'sqlpad' and this config will be loaded.
 Config Values:
 { ip: '127.0.0.1',
  port: 3000,
  httpsPort: 443,
  dbPath: '/home/bwunder/sqlpaddata',
  baseUrl: '',
  passphrase: 'secr3t',
  certPassphrase: 'No cert',
  keyPath: '',
  certPath: '',
  admin: 'bwunder@yahoo.com',
  debug: true,
  googleClientId: '',
  googleClientSecret: '',
  disableUserpassAuth: false,
  allowCsvDownload: true,
  editorWordWrap: false,
  queryResultMaxRows: 50000,
  slackWebhook: '',
  showSchemaCopyButton: false,
  tableChartLinksRequireAuth: true,
  publicUrl: '',
  smtpFrom: '',
  smtpHost: '',
  smtpPort: '',
  smtpSecure: true,
  smtpUser: '',
  smtpPassword: '',
  whitelistedDomains: '' }
 Loading users..
 Loading connections..
 Loading queries..
 Loading cache..
 Loading config..
 Migrating schema to v1
 Migrating schema to v2
 Migrating schema to v3 

bwunder@yahoo.com has been whitelisted with admin access.

Please visit http://localhost:3000/signup/ to complete registration.
 Launching server WITHOUT SSL

Welcome to SQLPad!. Visit http://127.0.0.1:3000 to get started

The browser page opens to a login screen. You must create a sqlpad user (sign up) before you can sign in. SQL password entry will come after you sign in when you create connections.

You get some charting abilities with sqlpad too. Nothing fancy, but useful for an aggregate IO, CPU or storage time-series visualization to be shared with colleagues.

sqlpad does seem to lose its mind now and again – I don’t know, maybe it’s just me – but a restart of the app seems to make everybody happy again. SOP.

~

It may behoove anyone responsible for a SQL Server for Linux instance to get familiar with sqlpad, mssql and sql-cli, as well as to rediscover sqlcmd and bcp at the command line, master mssql-conf and find all the other such tools you can. This is true even if/when the intention is to always use the Microsoft supported Windows client UI (SSMS, Visual Data Tools, etc.) for all development, testing and database administration. The thing is, we can never know when access at the server console will suddenly become the most expedient – perhaps the only – option for urgent – perhaps essential – access to any given SQL Server instance until that moment is at hand.

Personally, I like sqlpad but don’t really care much for the terminal query experience in any flavor I have tried, should no browser be handy. In truth, before I ‘found’ sqlpad I had built my own console tool that would at least accept a multi-line query that can be edited, then executed either with sqlcmd over ODBC to get a tabular result or with mssql v4.x over tedious to get JSON results. I affectionately named my tool sqlpal long ago. That was supposed to represent its foundations in SQL Server and the Vorpal node.js CLI from NPM. Get it? sql+pal? Then I started seeing references to SQL-PAL and PAL in the Microsoft vendor documentation concerning the CTP. Turns out SQL-PAL is the name for the interop layer between SQL Server and the Linux kernel or some such thing. Sorry for any confusion.

Sqlpal also includes some core Docker management automation plus the Vantage wrapper for Vorpal that enables remote ssh-like encrypted peer-to-peer connectivity and a locally managed IP firewall, among other things. Feel free to check it out if you want your SQL Server CTP to run in a container: https://www.github.com/bwunder/sqlpal. Docker unfolds a SQL Server for Linux development platform for folks that already know SQL Server but may not necessarily have worked with Linux or Docker much… yet.

Given a bit more refinement of the CTP, SQL Server for Linux in a container could even perform adequately behind many (most?) apps when released for production use in the near future. In the meantime, I’m going to investigate any possibility to combine sqlpad for the browser and its authentication protocols with sqlpal’s batch caching, query object and script folder hooks for the bash shell, because I need something to do.

Posted in node.js

ballad for a data miner

First you save some tuples, then you lose your scruples
people help you along your way, not for your deeds, but how you say
to take their privacy and freedom, just remind ’em they don’t pay
take their money too on a ploy, use what you’ve learned from loggin’ all day

canto:
Cartesian aggregations, Poisson Distributions
causal correlations and standard deviations
You can’t sell your sexy underwear to little kids who watch TV bears
unless your cuddly cartoon cub convinces them that mom…. won’t… mind…
(con variazioni: money… is… love…, wrong… is… right…, yes… means… no…)

Analyzing, calculating, place your bets, stop salivating
map reduce then slice and dice, this new ‘gorithm is twice as nice
money making, manipulating, paying to play and kick-back taking
suffering fools but taking their payoffs, while spying on staff to cherry-pick the lay-offs

canto

Watching out for number one, what the hell, it’s too much fun,
to those that worked so tirelessly, Thank You Suckers! but no more pay
So many losers along the way, “stupid people” getting in your way
Is that an angry mob in your pocket? Is that a golden fob that you got on it?

canto

This corporation’s got no conscience and global economies won’t scale
when free is just a profit center and people cheap commodities
whose choices are all illusions, fed by greed and false conclusion,
whose purpose is to do your bid, fight pointless wars and clean your crappers, please

canto

vamp till cue
First you save some tuples then you lose your scruples
People help you on your way not by your words but how you say

finale
Spent his last day in hole
won’t be going down any more
work for the man, live while you can
won’t be too long before you’re bound from this land

Posted in Privacy

ad hoc T-SQL via TLS (SSL): Almost Perfect Forward Secrecy

The day the Heartbleed OpenSSL ‘vulnerability’ [don’t they mean backdoor?] hits the newswires seems an ideal moment to bring up an easy way to wrap your query results in an SSL tunnel between the database server and wherever you happen to be, with whatever device you happen to have available, using node.js. (Also see the previous post re: making hay from you know what with node.js. And PLEASE consider this post as encouragement to urgently upgrade to OpenSSL 1.0.1g without delay! – uh… update March 3, 2015: make that 1.0.1k – see https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-02 – and/or node.js v0.10.36, see https://strongloop.com/strongblog/are-node-and-io-js-affected-by-the-freak-attack-openssl-vulnerability/ – geez, will the NSA ever come clean on all the back doors they own?)

Heartbleed is clearly the disclosure of a probably intentional free swinging back door in open source software being poorly disguised as a vulnerability discovered after years in the wild. I’m afraid “Oh, gee, I forgot to test that…” just doesn’t cut it when you’re talking about OpenSSL. That just made every one of us that has been advocating for open source software as a pathway toward the restoration of secure computing and personal privacy look like feckless dumb shits: as big o’ fools as those politicians from the apropos other party… You know who I’m talking about – the dumb-as-craps and the repugnant-ones or something like that… All classic examples of puppet politicians – as are we puppet software engineers – mindlessly serving the ‘good enough’ mentality demanded of today’s necessarily exuberant and young software engineers and as has been incumbent upon politicians throughout the times as humanity slogs this now clearly unacceptable – but nicely profitable for a few – course we travel… toward some glorious and grand self-annihilation – so they need us to believe anyway to justify the terminal damage they inflict upon the planet for self-profit.

In my estimation, the only lesson that will be learnt by proprietary software vendors and open source communities alike from the cardiac damage that OpenSSL is about to endure as a result of this little old bleeding heart will be to never admit anything. Ever. Some things never change.

OpenSSL just might not survive without the accountability that is established through full disclosure – at least about what really happened here but preferably as a community. Preferably a disclosure to provide compelling evidence that nothing else so sinister is yet being concealed. I doubt that can happen without full and immediate disclosure from every individual involved in every design decision and test automation script implemented or used during the creation, development and community review of that software. And I doubt any software organization or community would be able to really come clean about this one because – and I admit this is opinion based mostly on how I have seen the world go ’round over the last 60 years – maybe even a community building foundation of open source software such as OpenSSL can be ‘persuaded’ to submit to governmental demands and somehow also remain bound to an organizational silence on the matter? Prepare yourselves for another doozy from one of the grand pooh-bah – and real bad liars – from the NSA before all is said and done on this one.

May 9, 2014 – So far General Clapper has delivered as expected. On the tails of his April Fools Day admission of what we already knew (the NSA has conducted mass surveillance of American Citizens without warrant or suspicion for quite a while), he first denied having ever exploited the OpenSSL buffer back door in a bald-faced lie that he stuck with for maybe a week or three, and now he is merely reiterating an older, but extremely disturbing, tactical right he has claimed before for the NSA: to not reveal to even American and ally owners or to American and ally maintainers of open source code or hardware any exploitable bugs known by the NSA. All the owners and maintainers get to know about are the backdoors that they were coerced to willingly implement. That is just plain outrageous. A standard for tyranny is established. I guess we should be at least glad that the pooh-bah has been willing to share his despotic rule – at least in public – with first “W” and then Bronco. Hell, Bronco even got us to believe that keeping the pooh-bah on his throne was a presidential decision. We will have to wait and see if he can tolerate Monica Bengazi I reckon.

I wonder if we will ever hear that admission of the ultimate obvious truth that the NSA is covertly responsible for the existence of the OpenSSL back door? This must scare the hell out of Clapper’s inner circle – whoever they might be. Once they are forced to admit the first backdoor it won’t be long before the other US Government mandated back doors to our privacy begin to surface and close. I have no doubt there will be a whole lot more colluding public corporations than just Microsoft, Apple and Google. I know it’s deep and ugly, but I honestly have no idea just how deep and ugly. All I can see clearly is that there must be a good reason our Government has made such a big deal out of unrevealed backdoors planted for the Chinese Government in Huawei’s network hardware…


I made the claim in the title that this technique is using ad hoc queries. That needs some qualification. Queries in the example code below are submitted asynchronously by a node.js https server running at the database server. The query is not exactly ad hoc because you must place the SQL in a text file for use by the node.js https server before starting the node server; then you can execute the query from any browser with an IP path to the node server. While there is always a way to get to the text file and edit the query if need be, the idea described here is more useful for those ad hoc queries you run a few times over a few hours or days to keep an eye on something, then might never use again. The https server would only be of importance if there were sensitive data in the query results and you wished to avoid serving it on the network as clear text. If that is true, then the user interface you normally use is a better option wherever you can use it. The node server lets you see the query result from any device with a browser, or from a ‘private’ browser session on someone else’s device with a browser.

SQLbySSL

An OpenSSL generated key for self-signing, or else a CA signed certificate, on the database server is required before starting node. You could install the key and certificate in the local key repository, but that is not the method used here. Instead, a key and a certificate signing request are generated with OpenSSL. The key and self signed cert are kept in the node.js server’s root folder. You may need to ignore an “unable to write ‘random state’” message from OpenSSL during key (1) and cert (3) generation. Keep in mind that when using a self signed certificate you must also click through a browser warning informing you that the certificate is not signed by a certificate authority (CA). A few modern browsers will not allow you to click through this screen so will not work here – stock Chrome, Firefox, Android and Safari work just fine. Also keep in mind that anyone that can get your key and certificate can decipher a cached copy of any bits you shoved through SSL tunnels built with that key and certificate. Guard that key closely.

three ways to a self-signed certificate that will encrypt a TLS 1.2 tunnel
1. prompt for file encryption phrases and distinguished name keys
  // genrsa and similar are superseded by genpkey (e.g., openssl genrsa -out key.pem 1024)
  openssl genpkey -algorithm RSA -out key.pem -pkeyopt rsa_keygen_bits:1024
  openssl req -new -key key.pem -out request.csr
  openssl x509 -req -in request.csr -signkey key.pem -out cert.pem 

2. no prompts - your distinguished name (DN)
  openssl genpkey -algorithm RSA -out key.pem -pkeyopt rsa_keygen_bits:1024 -pass pass:keyFileSecret
  openssl req -new -key key.pem -passin pass:keyFileSecret -out request.csr -passout pass:certFileSecret -subj "/DC=org/DC=YABVE/DC=users/UID=123456+CN=bwunder" -multivalue-rdn
  openssl x509 -req -in request.csr -signkey key.pem -out cert.pem 

3. one command - no request file - no prompts
  openssl req -x509 -newkey rsa:1024 -keyout key.pem -out cert.pem -passin pass:keyFileSecret -passout pass:certFileSecret -days 1 -batch 

The key used to generate the request is used to sign the request certificate. Certificate and key are saved as .pem files in the node.js server folder. You could even roll-ur-own perfect forward secrecy, that is to say, automate the generation and signing of a new key before every request. Not quite perfect, but this could allow you to keep going in ‘manual mode’ with or without an urgent upgrade to close a risk that is not considered a risk when using perfect forward secrecy – at least until perfect forward secrecy is rendered ineffective in a few years.

Adding the one command key generation as the “prestart” script in the node app’s package.json will get you a new key each time you start the node.js server.
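
A minimal sketch of such a package.json, assuming a server.js entry point and reusing the one-command generation from option 3 above (the package name is arbitrary):

{
  "name": "sqlbyssl",
  "version": "1.0.0",
  "scripts": {
    "prestart": "openssl req -x509 -newkey rsa:1024 -keyout key.pem -out cert.pem -passout pass:certFileSecret -days 1 -batch",
    "start": "node server.js"
  }
}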

You may need to allow inbound TCP traffic on the port serving SSL pages (8124 in the example) in your firewall if you want to hit the query from your smartphone’s browser or any remote workstation that can ping the database server on the port assigned to the https server – and present your Windows domain credentials for authentication unless you hardcode a SQL login username/password in the connection string (not recommended).
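
On a Windows database server that could be something like the following netsh rule (the rule name is arbitrary; 8124 is the example port used here):

netsh advfirewall firewall add rule name="node https 8124" dir=in action=allow protocol=TCP localport=8124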

Speaking of which, Edge expects to find a connection string to the SQL Server in an environment variable set in the context where node.exe is called, before the node process thread is started.

SET EDGE_SQL_CONNECTION_STRING=Data Source=localhost;Initial Catalog=tempdb;Integrated Security=True

Lastly, when the node server is started you will be prompted at the console to enter a PEM password. It is not clear from the prompt but this is the phrase you used to encrypt the certificate file. I used ‘certFileSecret’ in the example above.

Happy Heartbleed day!


/*
  npm install edge
  npm install edge-sql
*/
var edge = require('edge');
var sys = require('sys');
var https = require('https');
var fs = require('fs');

var port = 8124;
var options = {
  key: fs.readFileSync('./key.pem'),
  cert: fs.readFileSync('./cert.pem')
};

var sqlQuery = edge.func('sql', function () {/*
  SELECT top(10) qs.total_worker_time AS [total worker time]
     , qs.total_worker_time/qs.execution_count AS [average worker time]
     , qs.execution_count AS [execution count]
     , REPLACE(
         SUBSTRING( st.text
                  , ( qs.statement_start_offset / 2 ) + 1
                  , ( ( CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH( st.text )
                        ELSE qs.statement_end_offset
                        END - qs.statement_start_offset ) / 2 ) + 1 )
         , CHAR(9)
         , SPACE(2) ) AS [query text]
  FROM sys.dm_exec_query_stats AS qs
  CROSS APPLY sys.dm_exec_sql_text( qs.sql_handle ) AS st
  ORDER BY total_worker_time DESC;
*/});

// listens for query results to dump to a table
https.createServer( options, function( request, response ) {
  sys.log('request: ' + request.url );
  if ( request.url==='/' ) {
    sqlQuery( null, function( error, result ) {
      if ( error ) throw error;
      if ( result ) {
        response.writeHead( 200, {'Content-Type': 'text/html'} );
        response.write('<!DOCTYPE html>');
        response.write('<html>');
        response.write('<head>');
        response.write('<title>SQLHawgs</title>');
        response.write('</head>');
        response.write('<body>');
        response.write('<table border="1">');
        if ( sys.isArray(result) )  {
          response.write('<tr>');
          Object.keys( result[0] ).forEach( function( key ) {
            response.write('<th>' + key + '</th>');
          });
          response.write('</tr>');
          result.forEach( function( row ) {
            response.write('<tr>');
            Object.keys( row ).forEach( function( key ) {
              if ( typeof row[key] === 'string' && row[key].length >= 40 ) {
                response.write('<td><textarea DISABLED>' + row[key] + '</textarea></td>');
              }
              else {
                response.write('<td>' + row[key] + '</td>');
              }
            });
            response.write('</tr>');
          });
        }
        else  {
          Object.keys( result[0] ).forEach( function( key ) {
            response.write( '<tr><td>' + key + '</td><td>' + result[0][key] + '</td></tr>');
          });
        }
        response.write( '</table>' );
        response.write( '</body>' );
        response.write( '</html>' );
        response.end();
      }
      sys.log("rows returned " + result.length);
    });
  }
}).listen(port);

sys.log('listening for https requests on port ' + port);

Posted in Privacy, Secure Data

Making JSON Hay out of SQL Server Data

Moving data in and out of a relational database is a relentless run-time bottleneck. I suspect you would agree that effecting metadata change at the database is even more disruptive than the run of the mill CRUD. I often hear the same [straw] arguments for a new cloud vendor or new hardware or new skill set or a code rewrite to relieve throughput bottlenecks. But what if what you really need is a new data store model? What if you have been building, and rebuilding, fancy databases as cheaply as possible on scenic oceanfront property beneath a high muddy bluff across the delta of a rushing river from a massive smoking volcano in the rain? That is to say, maybe an RDBMS is not quite the right place to put your data all [most?] of the time? Maybe… just maybe… SQL Server – and Oracle and PostgreSQL – are passé and the extant justifications for normalization of data are now but archaic specks disappearing into the vortex of the black hole that is Moore’s Law?

On the off chance that there is some truth to that notion I figure it behooves us to at least be aware of the alternatives as they gain some popularity. I personally enjoy trying new stuff. I prefer to take enough time examining a thing so that what I am doing with it makes sense to me. In late 2012 the open source MongoDB project caught my attention. I was almost immediately surprised by what I found. Intelligent sharding right out of the box for starters. And MongoDB could replicate and/or shard between a database instance running on Windows and a database instance running on Linux for instance, or Android or OSX [or Arduino or LONworks?]. And there was shard aware and array element aware b-tree indexing, and db.Collection.stats() – akin to SQL Server’s SHOWPLAN. Even shard aware Map-Reduce aggregations so the shards can be so easily and properly distributed across an HDFS – or intercontinental cluster for that matter – with ease. And tools to tune queries!  I was hooked in short order on the usability and the possibilities so I dug in to better understand the best thing since sliced bread.

The “mongo shell” – used for configuration, administration and ad hoc queries – lives on an exclusive diet of javascript. Equally easy to use API drivers are available from MongoDB.org for Python, Ruby, PHP, Scala, C, C++, C#, Java, Perl, Erlang and Haskell. There is more to the API than you find with the Windows Azure or AWS storage object, or Cassandra or SQLite for that matter, but still not as much complexity for the developer or waiting for results for the user as is invariably encountered with relational models.

In the course(s) of learning about the API and struggling to remember all the things I never knew about javascript and the precious few things I never knew I knew about javascript, I found myself working with – and schooling myself on – node.js (node). Node is a non-blocking single threaded workhorse suitable for administrative work and operational monitoring of servers, smartphones, switches, clouds and ‘the Internet of things‘. The mongo shell is still the right tool for configuration, indexing, testing and most janitorial grunt work at the database. Unlike node, the shell is not async by default and not all of the low-level power of the mongo shell is exposed through the APIs. Nonetheless, node uses the native javascript MongoDB API. And I must say that having the application and the db console in the same language and using the exact same data structures is huge for productivity. Minimal impedance for the DBA to force the mental shift between mongo server and node callbacks. Virtually no impedance for the developer to mentally shift between app layers and data layers!
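
For instance, once the products collection built later in this post exists, poking at it from the mongo shell is just javascript (the filter value is only an example):

> db.products.find( { "category.Name": "Mountain Bikes" } ).limit(3).pretty()
> db.products.stats()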

Perhaps node.js is a seriously excellent cross platform, cross device administrative tool as I believe, but I can only guarantee that it is fun. It is an open source environment with potential beyond any I can imagine for Powershell or ssh. Packages that expose functionality through javascript libraries written in C, C++, C#, Java and/or Python exist for connection to node. node makes no bones that MongoDB is the preferred data store. I am by no means a data store connoisseur though I have snacked at the NoSQL corner store and stood in the Linux lunch line often enough to feel entitled to an opinion. You’ll have to take it from there.

FWIW: 10gen.com, MongoDB’s sponsor corp, has a real good 6 lesson course on-line for kinesthetic learners that will give you journeyman skills with MongoDB and get you started with node.js. Mostly hands on, so there is a lot of homework. And it’s free.

Update April 22, 2015: While checking out the otherwise mediocre Node.js Jump Start course on the Microsoft Virtual Academy I did come upon another resource that might be better suited to the visual learner: The Little MongoDB Book by Karl Seguin, available as a no-cost pdf – or in low-cost hardcopy at your favorite book store.

To otherwise help ease your introduction – if you decide to kick the MongoDB tires using node.js and you are a SQL DBA – I provide an example data migration below moving a SQL Server model into a MongoDB document that you can easily set up locally or modify to suit your data. Most of the work will be completing the download and installation of the few open source software libraries required.

For data here I use the simplified products table hierarchy from the AdventureWorksLT2012 sample database available from codeplex. product is easily recognizable as what I will call the unit of aggregation.

The unit of aggregation is all data that describes an atomic application object or entity. From the already abstracted relational perspective one could think of the unit of aggregation as everything about an entity de-normalized into one row in one table. In practice, many relational data models have already suffered this fate to one extent or another.      

In the AdventureWorksLT database I see three candidates for unit of aggregation: customers (4 tables), products (6 tables) and Sales (2 tables, parent-child – the child is probably the unit of aggregation). Product is interesting because there are nested arrays (1 to many relationships) and a grouping hierarchy (category). Here is a diagram of the SQL Server data:

[diagram: ProductsDbDiagram – the SalesLT product tables]

This is loaded into a collection of JSON documents of products with the following JSON format. The value (right) side of each ‘name-value’ pair in the document indicates the table and column to be used from the SQL data.

[ 
  {
    _id : Product.ProductID,
    Name: Product.Name,
    ProductNumber: Product.ProductNumber,
    Color: Product.Color,
    StandardCost: Product.StandardCost,
    ListPrice: Product.ListPrice,
    Size: Product.Size,
    Weight: Product.Weight,
    SellStartDate: Product.SellStartDate,
    SellEndDate: Product.SellEndDate,
    DiscontinuedDate: Product.DiscontinuedDate,
    ThumbNailPhoto: Product.ThumbNailPhoto,
    ThumbNailPhotoFileName: Product.ThumbNailPhotoFileName,
    rowguid: Product.rowguid,	
    ModifiedDate: Product.ModifiedDate,
    category: 
      {
        ProductCategoryID: ProductCategory.ProductCategoryID,
        ParentProductCategoryID : ProductCategory.ParentProductCategoryID,
        Name: ProductCategory.Name,
        ModifiedDate: ProductCategory.ModifiedDate 	
      },
    model:
      {
        ProductModelID: ProductModel.ProductModelID,
        Name: ProductModel.Name,
        CatalogDescription: ProductModel.CatalogDescription ,
        ModifiedDate: ProductModel.ModifiedDate,
        descrs: 
          [
            {
              ProductDescriptionID: ProductModel.ProductDescriptionID, 			
              Culture: ProductModelProductDescription.Culture,
              Description: ProductDescription.Description,
              ModifiedDate: ProductDescription.ModifiedDate 	
            }
            ,{more descrs in the square bracketed array}... 
          ]
      }
   }
   ,{more products - it's an array too}... 
 ] 

The code is executed from a command prompt with the /nodejs/ directory in the environment path. I am using node (0.10.25) on Windows Server 2012 with SQL Server 2012 SP1 Developer Edition at the default location and MongoDB 2.2.1 already installed prior to installing node. SQL Server is running as a service and mongod is running from a command prompt. I am using only Windows Authentication. For SQL Server access I am using the edge and edge-sql npm packages. edge asynchronously marshals T-SQL through the local .NET framework libraries and returns JSON, but only works with Windows.

( April 3, 2017 update: have a look at the mssql package on NPM. Microsoft is the package owner and the output from the database is JSON by default. Works with Node.js on Linux, iOS and Windows )
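
A minimal sketch with that package, to contrast with the edge-sql approach below (connection values are placeholders, and this assumes mssql v4 or later):

// npm install mssql
var sql = require('mssql');
sql.connect( { user: 'sa', password: '<YourStrongPassw0rd>', server: 'localhost', database: 'AdventureWorksLT' } )
  .then( function( pool ) {
    // rows come back as an array of JSON row objects in result.recordset
    return pool.request().query('SELECT TOP (5) ProductID, Name FROM SalesLT.Product;');
  })
  .then( function( result ) {
    console.log( JSON.stringify( result.recordset, null, 2 ) );
    return sql.close();
  })
  .catch( function( error ) { console.error( error ); } );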

edge-sql result sets come back to the javascript application as name-value pairs marshaled from a .NET ExpandoObject that looks and smells like JSON to me. The work left after the queries return results is merely to assemble the atomic data document from the pieces of relational contention and shove it into a MongoDB collection. This all works great for now, but I am not totally convinced that edge will make the final cut. I will also warn you that if you decide to start adapting the script to another table hierarchy you will be forced to also come to understand Closures and Scope in Javascript callbacks. I hope you do. It’s good stuff. Not very SQLish though.

/*
  npm install mongodb
  npm install edge
  npm install edge-sql

  edge expects valid SQLClient connection string in environment variable
  before node is started.
  EDGE_SQL_CONNECTION_STRING=Data Source=localhost;Initial Catalog=AdventureWorksLT;Integrated Security=True

  edge-sql is a home run when aiming SQL data at a JSON target because
  you just supply valid T-SQL and edge will return the ADO recordset
  as a JSON collection of row objects to the scope of the query callback.   	

  edge-sql for a production system is a strike out (w/backwards K)
  1. returns only the first result requested no matter how
    many results produced
  2. the javascript file containing edge.func text is vulnerable to
    SQL injection hijack by adding a semicolon followed by any valid T-SQL
    command or statement provided first word in edge.func callback comment
    is insert, update, delete or select (not case sensitive)
  STEEEEERIKE 3. the connection is made with the security context of the
    Windows user running the script so database permissions and data can
    be hijacked	through an attack on the file with an edge.func('sql')
*/

var edge = require('edge');
var mongoClient = require('mongodb').MongoClient;

var mongoURI = 'mongodb://localhost:27017/test';

// named function expressions (compile time)
// you paste tested SQL queries in each functions comment block
// edge parses it back out and executes as async ADO
var sqlProductCursor = edge.func( 'sql', function () {/*
  SELECT  ProductID
        , ProductCategoryID
        , ProductModelID
   FROM SalesLT.Product;
*/});
var sqlProduct = edge.func( 'sql', function () {/*
  SELECT  ProductID AS _id
        , Name
        , ProductNumber
        , Color
        , StandardCost
        , ListPrice
        , Size
        , Weight
        , SellStartDate
        , SellEndDate
        , DiscontinuedDate
        , ThumbnailPhoto -- varbinary MAX!
        , ThumbnailPhotoFileName
        , rowguid
        , ModifiedDate
  FROM SalesLT.Product
  WHERE ProductID = @ProductID;
*/});
var sqlProductCategory =  edge.func( 'sql', function () {/*
  SELECT ProductCategoryID
       , ParentProductCategoryID
       , Name
  FROM SalesLT.ProductCategory
    WHERE ProductCategoryID = @ProductCategoryID;
*/} );
var sqlProductModel = edge.func( 'sql', function () {/*
  SELECT ProductModelID
        , Name
        , CatalogDescription
        , ModifiedDate
  FROM SalesLT.ProductModel
  WHERE ProductModelID = @ProductModelID;
*/});
var sqlProductModelProductDescription =
  edge.func( 'sql', function () {/*
    SELECT 	pmpd.ProductDescriptionID
          , pmpd.Culture
          , pd.Description
          , pd.ModifiedDate
    FROM SalesLT.ProductModelProductDescription AS pmpd
    LEFT JOIN SalesLT.ProductDescription AS pd
    ON pmpd.ProductDescriptionID = pd.ProductDescriptionID
    WHERE ProductModelID = @ProductModelID;
*/});		

mongoClient.connect( mongoURI, function( error, db ) {
  db.collection('products').drop();
});	 

mongoClient.connect( mongoURI, function( error, db ) {
  sqlProductCursor( null, function( error, sqlp ) {
    if ( error ) throw error;
    for (var i=0; i < sqlp.length; i++) {
      ( function (j) {
          sqlProduct (
            { "ProductID" : j.ProductID },
            function ( error, product ) {
              sqlProductCategory (
                { "ProductCategoryID" : j.ProductCategoryID },
                function ( error, category ) {
                  sqlProductModel (
                    { "ProductModelID" : j.ProductModelID },
                    function ( error, model ) {
                      sqlProductModelProductDescription (
                        {	"ProductModelID" : j.ProductModelID },
                        function ( error, descrs ) {
                          model[0].descrs = descrs;
                          product[0].category = category[0];
                          product[0].model = model[0];
                          db.collection('products').insert( product ,
                            function( error, inserted ) {
                              if (error) throw error;
                       		  });
                        });	// descrs
                    }); // model
                  }); // category
            }); // product
        })(sqlp[i]); // product closure
      }
    });
});	 

That’s all there is to it.

Posted in NoSQL

Background Checks for EVERYBODY!

A background check simply filters and formats personal information about an eating, breathing person into a somewhat standard and therefore, presumably, useful “packet”. Much of the information in a background check is already out there in the public domain. Most of the rest is already controlled and/or owned by the government. What is missing is the true and just application of filters and formats – the so-called algorithms – needed to organize and maintain said “packets” as useful to humanity.  

There are algorithms in use, primarily by marketers, but also governments and other well funded cartels, though we are offered no transparency, and the accounts that do manage to “leak” to within the public’s earshot suggest any claimed intention is at least dubious and too often just another scene in a bad but never-ending slapstick of buffoonery. Consider the place(s) that sold and shipped thousands of rounds of ammo to a disturbed individual that would, months later, commit mass murder after reaching out to mental health professionals for help. I can only wonder if the determination of the health care system and the public safety agency(s) to keep this guy off the street might have been enough to prevent the deaths of innocent movie-goers in Aurora, Colorado.

Anyone that has never done so might be shocked to see what anyone can learn about them at web sites that traffic in other people’s personal details. Anyone willing to pay the low, low price can look deeply into your background without your permission. To be sure, there are a number of web sites where anybody in the world can pay a few bitcoins to see a disturbing amount of information about you. That is to say: obtain your most vital personally identifiable information (PII) without leaving a trace.

US Government agencies like the IRS, NSA, FBI, CIA and ATF; industrial surveillance engines like Google, Bing and Yahoo; the myriad of cookie powered marketing and transaction data crapitalists; and the massive offline archives of banks, insurance companies, retailers and wholesalers all routinely accumulate giga-scads of data useful to check a person’s background. Ever so slowly, the players are sharing select bits of this information to make a buck, but so far everyone still totally sucks at cooperating to render this information useful to humanity, opting instead for self interests (e.g., profit, criminality, manipulation of the public, fear of reprisal, redress, revenge, etc…).

As I have previously blogged, Government Agencies hold mandates for unfettered, unquestioned and, as I remain convinced, un-American access to ‘pen and tap’ data in every data center and central office in the country. What they cannot take for the asking or with a little, sometimes heavy-handed, coercion, they just take. Government regulations intended to help may be making the situation worse. HIPAA, for example, sorta standardizes health information and compels health care providers to store your medical records in a supposedly secure but professionally shareable electronic form. Combine that with cryptography now widely suspected to have been hacked by government agencies, and the downstream government agencies are surely licking their chops over this pool of easy data and working quietly behind the scenes to make sure that, whatever happens, the backdoor will always be open to them. Security is fine. It would be even better if it actually worked though, and we know this doesn’t… So why isn’t there a plan to be of genuine service to the people this data is about? That’s all I want to know.

Government data proper can often be classified “public information”. Yet we the public have to know who to ask or pay – and too often it seems the secret word and/or the appropriate political alignment – to see it. Even among and between government agencies there exists no mandate to transparently share data.

Stores routinely collect video surveillance and purchase transaction data. Each will process, archive, aggregate and perhaps share the collected data in what-ever way has been decided by the management.

Many 24-7 news operations exist at the local, national and international levels.

The data is already out there. What is missing is a way of using this already collected and in many cases even already aggregated data so that the right decisions can be made at the right time. Even worse, the impetus for politicians remains too much about what not to tell the constituency. The real issue here is transparency not a fair and just background checking algorithm. The people calling the shots don’t like the heat in that kitchen.

The Artificial Intelligence (AI) and Business Intelligence (BI) tools now in use in union with the data already being collected are enough to implement fair and just algorithms to determine such things as who should or should not buy a gun – and assist with implementation planning that doesn’t end in a shoot-out well before the first shot has been volleyed.

A background check must assure the greatest measure of accuracy, accountability and transparency possible from the body of information about a person that is already in the lawfully shareable domains. With transparency in background checks anyone and everyone must be able to find all background information about themselves, where that information came from, and a history of all previous reads. The person could then work to correct any errors – and weaknesses – that show up in their own background. A person would get a de facto background check every time they created an entry or update in the shareable data set. Furthermore, mental health warning signs could be followed up as thresholds of concern are in peril rather than as interventions of greater concern.

Make no mistake, EVERYBODY’s PII data is out there in the wild right now. The tyranny is that the data, who uses it and how it is used can be lawfully kept from that person’s view and beyond their control. It can easily be used to persuade or deceive that person and as easily be manipulated – by someone who may or may not know anything about that person – to deceive others.

Still and all, imagine how anyone might react when they ‘fail’ a background check when trying to buy a gun, especially if they might already suffer with mental health issues or are already wanted for past crimes. Point-of-sale background checks that actually do what they are intended to do would surely make a bad situation worse – at least some of the time – when doled out as a pass or fail ticket awarded to one citizen waiting in line but not the next, especially if the ‘fail’ meant “no gun today” and the person brought the intention to do harm along with them to the gun seller. Couple that with the facts that most gun buyers in the US today are not buying their first gun (they are already armed) and that an unknown number of guns trade hands in private and possibly black market exchanges. (When going after a problem where too many people are dying because too many people have guns, the most laughable solution is to give more people guns. I mean… what could go wrong?)

Imagine, from yet another perspective, how it might affect the vote if voters could query candidate background data only to find that a favorite politician had failed to disclose a long history of mental illness or abuse – or even which, if any, of the judges you are supposed to approve at each election were blatantly corrupt or prescription drug abusers.

Imagine if those responsible for hiring our teachers and police had similar information when evaluating teaching candidates or even the longest tenured educators. And imagine we had the same information about teachers and police as the people that hire teachers and police. And teachers and police had the same information about us. And we had access to that information about our doctor or car mechanic or date. Shouldn’t everyone be exposed to the same level of scrutiny? NRA spokesperson Wayne LaPierre? President Barack Obama? You? Me?! It’s actually way too late to dicker over who should get such scrutiny. It’s happening now to everyone, but not equally and without transparency or proper oversight. It’s also well protected by widespread denial.

Still and all, I do wonder what would become of those of us who don’t meet the background check sniff test at some point. After living all my life in this society where people prefer not to know their neighbors, that truth could be so disturbing that even greater chaos ensues. So many are now armed with assault weapons, and so few police have the training and skills required to recognize – let alone counsel – a person safely through a mental health crisis, that there probably is not a lot anyone can do about guns already in the wild for generations without something as drastic as a massive weaponized domestic drone campaign to ‘take away the guns’ from the cold dead droned hands of those labeled “should not have guns”. More realistically, mental health interventions will remain a point of difficulty that will require the minds of our best trained and most skillful scientists, clergy and communicators.

It is beyond belief that the political system has even been discussing a plan that could so obviously end with a “take his guns away” order followed by desperation and then, too often, a shoot-out. Ordered revocation will not work any better than chasing the homeless out of town (again) or the prohibition of alcohol or hemp.

And that seems to be where the conversation is now stuck for eternity. Some folks seem to believe that background checks are a waste of time and won’t help a thing. The other side says background checks are not enough and even more rules and regulations are necessary, convinced that the type of weapon foretells a propensity for misuse. Meanwhile the politicians provide us the predictable disservices of misinformation, stonewalling and feckless bullshit in the hope that nothing changes other than an ever increasing number of zeros in their bank balance.

To get the conversation moving, perhaps we have to stop blaming any narrow slice of the population (e.g., gun buyers, disturbed individuals, terrorists, tree huggers, religious fundamentalists or even corrupt politicians) as the source of a systemic problem and, in so doing, erroneously seeing prevention as the elimination of certain stereotypical yet otherwise law-abiding persons from the population. As far as I can tell, criminals, haters and lunatics will find ways to do their deeds whether or not they can legally buy a gun or even use a gun. We need a system that ameliorates the aberrant behaviors before they become headlines of death and disaster. Continuous cradle-to-grave background checks, coupled with qualified trans-personal counselors who reach out to others regularly to help us understand what our background check is signaling, are essential. Unfortunately, those in positions of power would place too much of their power at risk under such a structure simply because EVERYBODY has a few “red flags” in their background.

The American public is apparently around 90% behind the need for better background checking of potential gun owners. Seems to me like everyone in the country is a potential gun owner and can easily circumvent ATF regulations just like always. Doesn’t that potential technically make everyone a candidate for a pre-purchase background check? And since anyone can buy a gun at any time, isn’t it important to maintain that background check? If it is entirely possible to buy a gun at any time, why on earth should we wait until someone is buying a gun to help them with the issues identified in their background data? Shouldn’t we be doing all we can to protect our communities from those risks easily uncovered through a background check?

All political systems of our time may already be too corrupt to even give lip service to a solution based upon transparency, accountability and human dignity. That is but one disadvantage of the bought-and-paid-for oligarchies we now suffer and the financially driven, politically biased media that would so quickly lose ratings if accountability and transparency of background checks were the rule for everybody. Probably, politicians would be the biggest losers. A reasonable compromise among all or even most elected officials is not possible in this time when everything but Wikileaks happens behind closed doors. Background checks must be transparent to be beneficial to humanity. Backroom deals bargain away posterity for all in the self-interest of the politicians and their buddies.

Background checks are already as much a part of life as death and taxes. The problem is that current background checks are sloppy, incomplete, inconsistent and highly susceptible to producing corrupt results. I bet city cops, for example, do get a much different background check than seasonal city workers. But I am not at all convinced the cop gets the better or more informative check of the two.

What we really need to do is agree on what needs to be in a background check and then go about the business of compiling and checking backgrounds consistently throughout the population. Whatever it is, I have no doubt that the US Government already has more than enough access to personal information to do this work. The Generals in charge of this access appear to not even need the OK of legislators or citizens to do this work. But instead of doing this work we are witnessing a very different militarization of law enforcement in this country. A militarization apparently meant to entertain: to show the people they have nothing to fear through media plays of shock and awe or some such thing. A militarization that has not been instrumental in making us any safer but showcases an impotent law enforcement supported mostly by clever videographers seeking to compose the most dangerous or scary video among the many videographers covering the situation, and eloquent spokespersons to spin the truth in the desired direction.

Consider the Boston Bombing. Authorities failed in every way to identify or isolate bags full of bombs scattered about the finish area as a real risk before the instant of attack. There was a well known, if not specifically known, risk, yet there was no preparation or screening to prevent such a simple attack. To identify the bombers, they relied upon video after the fact, collected tediously from each local business that had previously been compelled to install surveillance cameras for self protection against [lesser?] crimes. It took all the next day for authorities to filter the data and come up with a couple of freeze frames of video deemed ‘safe’ enough to release to the public. Then authorities had to crowd source the identity of those in the images – but just enough picture to get that name. All the while, authorities maintained a clear strangle-hold on the media covering the event. This demonstrates once again – and without any doubt – that the government has the technology and the data access to quickly and extensively check any person’s background once they have the person’s name.

Unfortunately the late and well orchestrated photo release backfired: the bombers were shown that their covers were blown and thought it best to make a run for it. Duh. That could have been the end of it, but one of the two escaped after firing 200+ rounds and lobbing two bombs in a battle with police instigated by the criminals en route out of town. The authorities then continued with a heavily armed show of military force as they spent a long, hard, fruitless day searching door to door to door in combat gear. The camouflage uniformed, automatic weapon toting combat cops and the armored and camouflaged vehicles in Boston were all over the TV in Colorado that day, all day. I was certainly hypnotized by and horrified at the media coverage. Finally the authorities called off the search empty handed at the end of the day and vacated a ‘sheltering in place’ order for a million people in Boston – even though it was not known to be any more or less safe. But involving the people was once again that little flash of transparency the situation needed. A citizen found the bomber in his back yard, well outside of the search area. Hours later the bomber was finally arrested – but only after government agencies had unloaded a couple of clips from police assault weapons into the hiding place of what turned out to be an already unarmed and already badly wounded criminal. All the next day the spokespeople of the various authorities took turns telling the TV camera what a great job the authorities had done.

Now that the event seems past, I can only wonder how much faster this tragedy could have been brought to a conclusion if the police were transparent and accountable instead of operating as splintered secret military and para-military operations. Or whether they will now do anything that might improve their chances of preventing bags full of bombs from being spread around the finish line of the next Boston Marathon.

I am certainly not trying to defend these bombers. I am saying that background checks for everybody already show promise on those rare occasions when given a chance to work. And I’m saying that everyone already gets background checks that they are not aware of. Granted, I’m talking about an explicit ongoing background check with complete transparency and full accountability. The government would at last get to stop pretending they do not invade the privacy of citizens inappropriately.

The development and recurring revenue potential for universal background checks alone may be among the most impactful governmental concessionaire opportunities of all time! Not that I advocate for a Big Brother state. I just think we ought to use the data already being collected and aggregated – and therefore already mandated in accordance with CALEA to be available for the pleasure and [mis]use of law enforcement – to also benefit those the data is about. Who better than those already playing by CALEA rules to lead the push? Essentially, I want the access and knowledge already expended by the agencies and the corporations they do business with about my background to be extended also to me, and to allow me to extend it to others I have determined have a bona fide reason to peer into my background. And of course, I want to know who looks at my background. I believe that the right way to ‘enforce’ background checks is through trans-personal counseling. If it all happens before a crime is committed, why even involve the police? For that matter, why pretend that counseling a person to help them avoid trouble is equivalent in any way to criminal enforcement actions?

Maybe all we have to do is turn caring about others into a guaranteed revenue stream and capitalism will magically protect us from homegrown terrorism? I admit, though, there is $omething about that idea that does not fill me with hope, but I can’t quite lay my hands on enough of it…

Seriously, the time to make contact with a person that has done something – or enough somethings – that might disallow them from buying a gun or bullets or even a weapon repair part is at the moment they should no longer be allowed to do such things. Waiting until they intend to act – and so want to buy a gun – to tell them they cannot buy the gun does not solve any problem before creating another, potentially even more contentious problem.

I would much rather see law enforcement equipped with the truth as borne out by my background – and also required to live among those they police – than the aloof militarized militias we now see so coldly clubbing, shooting and spraying large throngs of unarmed non-violent protesters, most often youthful and frequently not white, who gather in outrage at the many and blatant injustices of the time. I also note that the cops have an impressive kill rate for old men, supposedly always “angry old men” and typically “barricaded” in their own homes, so I keep a low profile. Perhaps I am most frustrated to see these over-funded, highly secretive, militarized agencies only ever able to arrive on the scene after the evil ones among us have acted with such terrible consequence to so many innocents. Seems like gross overkill for that sort of a mop-up operation, but damn! don’t they look badass on TV.

Posted in Privacy | Leave a comment

Tails from a Diskless Hyper-V

The Amnesic Incognito Live System (Tails) is open source privacy software. Tails “helps you to use the Internet anonymously almost anywhere you go and on any computer but leave no trace…”. This post explores how a Windows private cloud or data center might leverage Tails to harden defense in depth.

Tails is a Debian Linux .iso rootkit – er I mean boot image – configured to enable peer-to-peer encryption of e-mail messages and IM messages plus The Onion Router’s (Tor) anonymous SSL web browsing. Tor’s add-ins set [java]scripting and cookies off for all web sites by default, although the user can elect to allow scripts or cookies on a per site basis. The recommended way to use Tails is to burn a verified download of the .iso on to a write-once DVD and then use that DVD as the boot device to start the computer. Tails is mostly designed and configured to leave no trace in that scenario and to assure that once the verified image is laid down on a DVD it cannot be changed. 
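
As a minimal sketch of that verification step – assuming GnuPG is installed, and with the file names below standing in as placeholders for whatever release you actually downloaded from tails.boum.org – the detached OpenPGP signature can be checked from the console before the image is ever burned or booted:

   # import the Tails signing key (download it from tails.boum.org first; file name is an example)
   > gpg --import tails-signing.key

   # verify the downloaded image against its detached signature (names are examples)
   > gpg --verify tails-i386-1.0.iso.sig tails-i386-1.0.iso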

One limitation of this preferred scenario is that you need to reboot the machine a couple of times each time you use Tails. Once to boot and once to remove the footprint left behind in memory.

Another limitation is that a DVD drive may not be readily available or accessible when needed. Tails developers suggest an almost as secure USB alternative to the DVD, but caution that an ability to surreptitiously modify the kernel is introduced. Tails also allows the user to manually configure local storage, opening a potential security hole. Local storage is needed, for example, to load cryptographic keys for the secure OTR IM and PGP email messaging apps included for peer-to-peer privacy. Tails does automagically configure a piece of its memory as a RAMdisk, allowing keys to be introduced without persistence – in theory.

Virtualization too, I propose, can remove the reboot overhead; however, the Tails documentation cautions against running Tails as a virtual machine (VM). “The main issue,” they say, “is if the host operating system is compromised with a software keylogger or other malware.” There simply is no facility for the VM to be sure no such spyware exists on the host. The usage I am suggesting below is the inverse of that trust model. Here we will use Tails to isolate the trusted host’s Windows domain from the Internet, leveraging virtualization to help preserve the integrity of the trusted node. From a practical standpoint, a better rule of thumb – though still in line with the cautious Tails statement on virtualization – may be to trust a virtual environment only to the extent you trust the underlying host environment(s) that support the virtual machine.

Nov 8, 2015 note – Unfortunately, the growing concerns that Tor is compromised are legitimate:

   https://invisibler.com/tor-compromised/

   http://www.idigitaltimes.com/best-alternatives-tor-12-programs-use-nsa-hackers-compromised-tor-project-376976

Also, for virtualization other than Hyper-V see this information about Tails and virtual machine security at boum.org:

   https://tails.boum.org/doc/advanced_topics/virtualization/index.en.html 

A Windows Domain in a physically secured data center implies that the Domain and the data center Ops and admin staff are trusted. But when you open ports, especially 80/443, into that Domain, that trust is at increased risk. Given Hyper-V administrator rights on a Windows 2012 Server – but not administrative system rights on the server – using Tails from a virtual machine might just be a safer, more secure and self-maintaining usability enhancement for a Windows-centric data center or private cloud.

  • Tails can eliminate many requirements that expose the Windows Domain to the Internet. Internet risks are sandboxed on the Linux VM. The Linux instance has no rights or access in the Domain. The Domain has no rights or access to the Linux instance other than via Hyper-V Manager. Most interestingly, Tails boots to a Virtual Machine that has no disk space allocated (other than the RAM disk already mentioned).
  • Tails will thwart most external traffic analysis efforts by competitors and adversaries. DPI sniffers and pen register access in the outside world will only expose the fact that you have traversed the Internet via SSL traffic to the Tor Network. SSL will prevent most snooping between the VM and the onion servers. No more than a handful of governments – and a few other cartels with adequate processing power – will even have the ability to backdoor through the Certificate Authority or brute force the SSL to actually see where you are going on the Internet.
  • The Tails developers take care of the security updates and other maintenance. To upgrade or patch when used in the read-only diskless Hyper-V configuration, all you need do is download the latest image file.

Some organizations may be resistant to this idea because Tails will also allow employees to privately and anonymously communicate with the outside world while at work. True enough, the broadcast pen register from a Tor packet will simply not provide adequate packet-inspectable forensic surveillance detail to know what data center employees are up to. That alone could put the kibosh on Tails from a Diskless Hyper-V.

Organizational fear of employees notwithstanding, Tails in a Windows data center presents a robust security profile with excellent usability for those times when the knowledge available on the Internet is urgently needed to help solve a problem or understand a configuration. I would discourage efforts to configure a back door to monitor actual Tails usage from the host, simply because once that back door is opened anybody can walk through. Digital back doors swing both ways: better to put your monitoring energy into making sure there is no back door.

Tails is easy to deploy as a Hyper-V VM on Windows Server 2012 (or Windows 8 Pro with the Hyper-V client):

  • download and verify the file from https://tails.boum.org. No need to burn a DVD. Hyper-V will use the .iso file, although a DVD would work too if that is preferred and will undeniably help to assure the integrity of the image. A shared copy of the .iso can be used across an environment. It is necessary to ensure that the VM host computer’s management account and the user account attempting to start the VM have full access to the file share and/or file system folder of the image.
  • add a new Virtual Machine in Hyper-V Manager (a PowerShell sketch of the same steps follows this list):
  1. Give the VM 512MB of memory (dynamic works as well as static)
  2. Set the BIOS boot order to start with “CD”
  3. Set the .iso file – or the physical DVD drive if that option is used – as the VM DVD Drive.
  4. Configure the VM with a virtual network adapter that can get to the Internet.
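
If you would rather script those steps than click through Hyper-V Manager, here is a minimal PowerShell sketch. It assumes the Hyper-V PowerShell module is available; the VM name, switch name and .iso path are placeholders for your own values, and the optional Get-FileHash check requires PowerShell 4.0 or later.

   # example values – adjust for your environment
   $iso    = 'C:\ISO\tails.iso'
   $vmName = 'TailsVM'
   $switch = 'External'      # an existing virtual switch with Internet access

   # optional: compare the image hash against the value published on tails.boum.org
   Get-FileHash -Path $iso -Algorithm SHA256

   # create a diskless VM with 512MB of memory and no VHD attached
   New-VM -Name $vmName -MemoryStartupBytes 512MB -NoVHD -SwitchName $switch

   # dynamic memory works as well as static
   Set-VMMemory -VMName $vmName -DynamicMemoryEnabled $true

   # boot from CD first (generation 1 BIOS boot order)
   Set-VMBios -VMName $vmName -StartupOrder @('CD','IDE','LegacyNetworkAdapter','Floppy')

   # point the VM DVD drive at the Tails .iso
   Set-VMDvdDrive -VMName $vmName -Path $iso

   Start-VM -Name $vmName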

May 5, 2014 note – I had to enable MAC spoofing in Hyper-V for the Internet Network Adapter when I used the newly released Tails version 1. The checkbox is located on the Advanced Features of the Network Adapter of the VM. You will not find the Advanced Features option when accessing the Virtual Switch; it is a setting of the Network Adapter assigned to the Tails VM. I suppose another option would be to remove the MAC address hardwired into Tails’ “Auto eth0”, but that would also reduce your anonymity. It works this way, but that is all the testing I did on it! Use the hardwired MAC if possible.
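
The same checkbox can also be flipped from an elevated PowerShell prompt if that is more convenient – a one-liner sketch, again assuming the VM is named TailsVM:

   > Set-VMNetworkAdapter -VMName "TailsVM" -MacAddressSpoofing On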

  • Start the VM and specify a password for root when prompted. You will need to recreate the root password each time you start the VM in the diskless configuration. It can be a different password for each re-start – your call – though you should still use a strong password (e.g., suitably hardened to meet local password policies) and change it according to policy. The degree of protection of the local Domain from the Internet is dependent upon the security of this password. You never need to know that password again after you type it twice in succession, and you don’t want anyone else to know it either… ever.
  • Use the Internet privately and anonymously from your shiny new diskless Virtual Machine.

Iceweasel can browse the Internet just fine in the diskless configuration. PGP and OTR, however, both require persisted certificates, and that requires disk storage. Instant Messenger and POP email using the tools in Tails won’t happen unless persistent certificates are available. There are probably a number of ways certificate availability can be realized, e.g., RAMdisk, network, fob, etc.

A Hyper-V Administrator cannot be prevented from configuring storage inside the Virtual Machine if storage is available to the Hyper-V Administrator. Hint: a Hyper-V Administrator can be prevented from configuring storage inside the Virtual Machine if no storage is available to the Hyper-V Administrator.

Not a total solution, but it gives a very clean way to jump on the Internet when needed without exposing the domain to the Internet.

Posted in Privacy, Secure Data | 2 Comments

For Privacy Open the Source & Close the Back Door

There is no surprise in the many recent corporate self-admissions that they too have given up our private information. After all, they got us to release our privacy to their care with barely a flick and a click. As a direct consequence – and without need of oversight through lawful warrant or subpoena – Internet service providers (ISPs) and telecommunications service providers are compelled to release our pen registers, profiles, email and stored files to host location authorities (e.g., local, state and federal agencies) everywhere in the world when requested. The corporations can, will and have freely, willingly and routinely provided our private data, stored on their servers or clouds, upon request. And any will decipher our encrypted private data to assist such surveillance if they can. It is all done with our expressed permission.

A 2012 study I read about in The Atlantic estimates that we each would have to spend about 200 hours a year (that is 78 work days, with a calculated cost of $781 Billion to GDP) to actually read all the privacy policies we have accepted. At the same time, the word count in privacy policies is going up, further reducing the likelihood that they will be read and understood. In my opinion, the purposeful design of privacy policies – to make them easiest to accept without reading – demonstrates the Internet’s ability to coerce the user into acceptance.

“It’s OK, I have nothing to hide,” you might be thinking. And to that, “It won’t hurt a thing,” is often added to the same fallacious rationalizations. That sort of thinking is continuously exposed for what it is by the stinky announcements that gigantic globs of our personally identifiable information (PII) stored on corporate servers have been leaked to the bad guys through massive and mysterious spigots lurking in some company’s data. The leaks signal the reminder that government mandated surveillance back doors in the data center (DC) and central office (CO) architectures help provide the weakened security upon which Internet hackers rely.

Thanks to the server back doors, criminals and marketers enjoy the same back door transparency without accountability as do government agents or anyone else that somehow has access through the back door. Truth be told, marketers have better back door access than government agencies in many cases. This is generally the case when you deal with any free service or web site that boasts they “do not save your data”. What they usually do is mine it as it comes through and distribute some part directly to a third, fourth, fifth, etc. party for harvest. Unauthorized outsiders and criminals often rely upon masquerading as an administrator, marketer or possibly a government agent at the back door.

So it is.

Back doors of any stripe undermine security. Exploiting server back doors is a common objective of marketers, sellers, executives, governments, employees, hackers, crackers, spies, cheats, crooks and criminals alike. The attraction is that there is no way for you to tell who is standing at the back door or who has ever accessed your PII data at the server. While intrusion detection and logging practices have improved over time, they lag in uptake of state-of-the-art technologies. At the same time, the talents of intruders have not only kept pace with but often are defining the state of the art.

Computing back doors are not a new phenomenon. We could by now be raising our children to fear rootkits as if by instinct. Rootkits are just back door knobs.

Cookies? Trojans? Worms? Other so-called malware – especially when the malware can somehow communicate with the outside world. It all fits out the back door. SQL Injection? Cross-site scripting? Man-in-the-middle attacks? Key-loggers? Just back doorways.

I need to take it one step further though. To a place where developers and administrators begin to get uncomfortable. Scripting languages (PowerShell, c-shell, CL, T-SQL, VBA, javascript, and on and on and on) combined with elevated administrative authority? All free swinging back doors.

That’s right! Today’s central offices, data centers and, by extension, cloud storage services are severely and intentionally weakened at their very foundation by mandated back doors that have been tightly coupled to the infrastructure for dubious reasons of exploitation. That’s nuts!

What’s worse? We the people – as consumers and citizens – pay the costs to maintain the very electronic back doors that allow all comers to effortlessly rob us of our earnings, identities and privacy. What suckers!

And we provide the most generous financial rewards in society to the executives – and their politicians – that champion the continuation of senselessly high risk configurations that burp out our private information to all comers. That’s dumb.

~~~~~

So, how did we get here? It started way before the PATRIOT Act or September 11, 2001. The process has served to advantage government and – in exchange for cooperation – business, with little transparent deliberation and much political bi-partisanship. Both corporate and political access without accountability to user PII has been serviced at the switch in Signaling System 7 for as long as there have been such switches, and at the server for as long as there have been servers.

To wit, Mssr. A. G. Bell, and Dr. Watson I presume, incorporated AT&T in 1885.

To implicate contemporary corporate data stewards, all one need do is look at the explosion in so-called “business intelligence” spending to see user data in use in ways that do not serve the interests of, or in any other way benefit, the user. Most often the purpose is to aid others to make more money. I leave it to you to decide how others might profit from your data.

Some act without any degree of ethical mooring. There is a driven interest, by most corporations that can afford the up front infrastructure costs, to use all the data at their disposal in every way imaginable in the quest to lift the bottom line, and it is done regardless of whether it is a people-harming virtue of capitalism. The only thing that matters to a Corporation is profit. I mean, who would ever sell cigarettes using advertising filled with sexy beach scenes and handsomely rugged cowboys but knowingly forget to mention that smoking cigarettes is one of the worst things you could ever do to yourself? This intention to mine your data for behavioral advertising purposes is one of the topics you could have read a few words about, deep under that “I have read” button you magically uncovered and thoughtlessly clicked through when presented the chance to read those pesky privacy policies first. Too late now…

The legislation and adjudication in opposition to government mandated communication back doors in the US can be followed back to the bootleggers during Prohibition. In 1928 the Taft Supreme Court (Coolidge was the President) decided (5-4) that obtaining evidence for the apprehension and prosecution of suspects by tapping a telephone is not a violation of a suspect’s rights under the 4th or 5th Amendments to the US Constitution.

The Communications Act of 1934 (Roosevelt) granted oversight of consumer privacy to the newly created Federal Communications Commission (FCC).

Beginning in the 1960’s, with no real concern evident among the people, television revelations began weekly broadcasts showing how Opie’s Pop, Sheriff Andy, could listen in on your phone calls or find out who you had talked with and what you had said in past phone conversations. All he had to do was ask Sarah at the phone company.

Alas, in 1967 the Warren Supreme Court (Johnson) overruled the 1928 decision (7-1) and said the 4th Amendment does in fact entitle the individual to a “reasonable expectation of privacy.” This was widely thought to mean government agents had to obtain a search warrant before listening in on a phone conversation. However, the erosion of privacy at the confluence of surveillance and profit has since become a muddy delta.

All privacy protection during “any transfer of signs, signals, writing, images, sounds, data, or intelligence of any nature transmitted in whole or in part by a wire, radio, electromagnetic, photo-electronic or photo-optical system that affects interstate or foreign commerce” was revoked in the US – in a bi-partisan fashion – as the Electronic Communications Privacy Act (ECPA) of 1986 (Reagan). ECPA effectively expanded the reach of the Foreign Intelligence Surveillance Act (FISA) of 1978 (Carter) to include US Citizens: heretofore protected by the Bill of Rights from being spied upon by the US government.

No one I know had an email address in 1986. So no one cared that ECPA stripped American citizens of their email privacy. No one I know does not have an email address in 2013 (Update April 1, 2017: free email is now on life support and about to die – an abortion would have been so much better for everyone). Still, few seem alarmed that there has been no electronic privacy in the US since 1986. Judging by the popularity of the Internet-as-it-is and in the light of the unrelenting and truly awful stories of hacking resulting in travesties from identity theft to stalking to subversion of democracy coming to the fore every day, perhaps nobody even cares?

But it continues to get worse for you and me. With the Communications Assistance for Law Enforcement Act (CALEA) of 1994 (Clinton), the full burden of the costs to provision and maintain an expanded ECPA surveillance capability was thrust upon the service provider. I leave it, again, to you to decide how service providers funded the levy (hint: profits are up). Beginning explicitly with CALEA, providers are now required to build data centers – and System 7 COs, cellular networks, SMS, etc. – with a guaranteed and user friendly listening ability for surveillance agents working under ECPA authority: the free swinging back door became a government mandate.

The Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA PATRIOT ACT) of 2001 (Bush 2) removed any need to notify an individual that they had been under surveillance until and unless authorities arrest and charge that individual. The burden of electronic privacy was placed squarely on the individual. Privacy officially died. Not that things really changed all that much.

Even now agencies play games under the cover of the USA PATRIOT ACT by charging US non-citizens and holding and torturing them as desired in indefinite detention in offshore facilities, perhaps in part to avoid having to disclose methods should some matter ever come to trial. I have no way to know exactly what they are doing, but the pattern of escalating surveillance permissiveness in legislation, combined with the steady leaking of heinous truths over time, suggests that it is only a matter of time before the ability to hold citizens without charge becomes an effective sledgehammer methodology for agencies and, then, the local police. History is quite clear that such detainment will be used and will be used inappropriately.

Still the politicians remained unsatisfied? In 2008, FISA was amended to effectively eliminate the distinction in agency surveillance of an enemy combatant and a citizen. Now, indeed, everyone, citizen and non-citizen alike, is ‘the enemy’ through the FISA visor. FISA Amendment changes continue to ripple through ECPA, CALEA and USA PATRIOT ACT regulations in an expansion of authority to a force already claimed by its bureaucratic leadership to be stretched too thin to keep track of what it is doing. That expansion is accompanied by a decrease in already inadequate and faltering judicial oversight, now made less transparent and less accountable than is necessary for an effective and democratic “rule of law”.

In 2006, and then again in 2011, the USA PATRIOT ACT regulations that were supposed to expire – because they would make the country safe enough in a limited time to not be needed in the future – were extended… and re-extended.

Recently the NSA claimed it would violate our privacy if they secretly told even the two US Senators authorized for NSA oversight approximately how many citizens they had electronically spied on. Why is that not disturbing to most? It is worth noting that the Generals of the NSA – yes, the US Military calls the shots on privacy for all American Citizens – made it clear at that time that perhaps no one has a way to tell who has and has not been electronically spied upon, as an alternative way to explain why they could not answer the Senators’ question.

It might be OK if privacy had been fairly traded for security, but that has not happened. Instead, the government has given our privacy to these unaccountable agencies and the terrorism continues. The police and other agencies are arriving only in time to clean up the mess, spending shit loads of the public’s money putting on a good show for the cameras, and spinning the truth about how much these laws are helping. They may be getting better at stopping the second terror attack of a particular stripe, but that is only valuable to society when the bad guys repeat a type of attack. So far, that is not happening. The agencies are being pwned big time and don’t even notice because they are too busy reading our email.

The 4th Amendment is, for all intents and purposes, null and void – unless you have a bigger gun. The 9th Amendment is now about exceptions and exclusions to rights instead of the protection of rights not named elsewhere in the Bill of Rights, as the unchanged text of the amendment would suggest. If I understand correctly, even the 1st Amendment has been struck. I’m not a constitutional expert, but I am 100% positive privacy is out the window like a baby toy, and we are now too far down that road to even think about going back to find it.

Our government is now self-empowered to spy on the people, self-evidently convinced it must spy on the people and self-authorized to exterminate its own citizens without the process of law we are told is due every citizen. This is territory most inconsistent with the Constitution of the United States as I understand it and wholly unacceptable to the vast majority of the citizenry with knowledge of the matter as far as I can tell. Indeed, what people on earth should tolerate such governance?

 

Update August 22, 2015. The USA FREEDOM Act of 2015 (Obama) stirs the muddy waters of privacy but in the end is little more than a re-branding effort that hopes to squelch the post-Snowden outcry against mass surveillance.

~~~~~   

So, what can be done? Here are some guiding principles for anyone seeking to take back their online privacy. It ain’t pretty:

  1. There is no plan to delete anything. Never write, type, post or say anything [on-line] you do not want others to see, read, overhear or attribute to you. Anything you put on the Internet just may be out there forever. IBM has boasted the ability to store a bit of data in 12 atoms. Quantum data storage is just around the corner. MIT suggests that Quantum computing (@ 2 bits per atom) will be in Best Buy by 2020. And search technology is making orders of magnitude larger strides than storage technology.
  2. You cannot take anything back. Accept that all the information you may have placed online at any time – and all so called ‘pen registers’ that document your interactions during placement – does not belong to you. Sadly, you may never know the extent of compromise this not-yours-but-about-you data represents until it is too late to matter. The single most important action you can take to safeguard what little is left of your privacy – from this moment forward – is to use only peer reviewed Open Source privacy enabled software when connected to the Internet and to deal only with those who respect your privacy. But where are those capitalists?
  3. Stop using social web sites. There are many ways to keep track of other peoples’ birthdays. There is not much worth saying that can be properly said in one sentence or phrase and understood by boodles of others. Makes for good circus and gives people something to do when there is nothing appealing on TV, but not good for communication or privacy. Combine the keywords from your clucks, the demographics from your birthday reminder list and your browsing history, and it is far more likely that you can be effectively ‘advertised’ into a purchase you had not planned or researched the way you likely had claimed you always do. Such behavior-inducing advertising, in essence, cheapens life while it makes a few people a lot of money.
  4. Avoid web sites that know who you are. Search engines and portals, like all free-to-use web sites, get their money either through donations and fundraising or else through the back door by keeping and reselling the history. Maybe forever? This data is not generally encrypted, nor even considered your data (oops, there goes that pesky Privacy Policy again). Nonetheless, anyone that can hack into this not-your-data has the information needed to recreate your search history and, in all likelihood, to identify you if so desired. Corporate data aggregations and archives – so-called data warehouses – often leave related data available to business analysts, developers, network engineers, and any sneaks who might find a way to impersonate those behind-the-scenes insiders, through a nicely prepared user interface that can drill down from the highest aggregations (e.g. annual corporate sales or population census data) to the actions and details of an individual in a few clicks. Once ordered, organized, indexed and massaged by high powered computers, this data remains ready for quick searching and available in perpetuity. Protect your browsing history and searches from as much analysis as possible – a favorite pen register classed surveillance freebie for governments (foreign & domestic), marketers, and criminals alike. One slightly brutal way might be to surf only from freely accessible public terminals and never sign in to an online account while surfing from that terminal. An easier and open source – but still more work than not caring – way may be to hit Tor’s onion servers using Firefox and Orbot from your Android device, or the Tor browser bundle from your Linux desktop or thumbdrive. (We have no way to know if the Windows or Mac desktops are backdoored.) You could even combine the two approaches with Tails – assuming you can even find a public kiosk or Internet Cafe that will let you boot to Tails. A VPN from home would work well too, if you can be certain the VPN provider holds your interest and privacy above more profits.
  5. Use only open source software that you trust. Avoid all computer use, especially when connected to the Internet, while logged in with administrator or root authority. Particularly avoid connections to the Internet while logged in with administrator or root credentials. Avoid software that requires a rooted smartphone or a local administrator login during use.
  6. adopt peer-to-peer public key cryptography 
    1. securely and safely exchange public keys in order to have confidence in the integrity of the privacy envelope of your communications and exchanges with others (a command line sketch of this flow follows this list).
    2. exchange only p2p encrypted emails. Never store your messages, even if encrypted by you, on a mail server, else you forgo your right to privacy by default. I think US law actually says something like: when your email is stored on somebody else’s mail server it belongs to that somebody else, not to you. Even Outlook would be better, but Thunderbird with the Enigmail OpenPGP add-on is a proven option for PGP encryption using any POP account. The hard part will be re-learning to take responsibility for your own email after becoming accustomed to unlimited public storage (and unfettered back door access). It will also become your responsibility to educate your friends and family about the risks, to convince them to use peer-to-peer public key cryptography and secure behaviors too. Until then your private communications to those people will continue to leak out no matter what you do to protect yourself.
    3. exchange only p2p encrypted messages. For SMS text, investigate TextSecure from Open Whisper Systems. I don’t have a suggestion for SMS on the desktop. For other messaging check out Gibberbot, which connects you through the Tor network on your Android device. If used by all parties to the chat, this approach will obfuscate some of your pen registers at the DC and all of your message text. Installing Jitsi adds peer-to-peer cryptography to most popular desktop Instant Messaging clients. Jitsi does not close any back doors or other vulnerabilities in IM software. Your pen registers will still be available at the server and attributable to you, but your private information will only be exposed as encrypted gibberish. Using the onion servers with Jitsi or Gibberbot will help obfuscate your machine specific metadata, but the IM server will still know it is your account sending the message. Security experts seem convinced that Apple’s loudly advertised iMessage has its own back door: http://blog.cryptographyengineering.com/2012/08/dear-apple-please-set-imessage-free.html
    4. exchange p2p encrypted files. If you get the public key exchange right, this will be a breeze.
    5. exchange p2p encrypted SMS messages, else avoid SMS. I had briefly used TextSecure from Open Whisper Systems on Android 4.x. I don’t have a secure, tested Windows or Linux desktop suggestion for SMS.
    6. exchange p2p encrypted voice communications. Web phone Session Initiation Protocol (SIP) providers are subject to the same pen and tap logging rules as all other phone technologies. The biggest practical differences between SIP and good old System 7 or cellular switching are the open source software availability and the throughput potential. With SIP, several open source apps are available now built upon Zimmermann’s Real-time Transport Protocol (ZRTP) for peer-to-peer encryption of SIP-to-SIP multimedia capable conversations. I know Jitsi includes ZRTP by default for all SIP accounts registered. When a call is connected the call is encrypted, but ONLY if the other party to the call is also using a ZRTP peer.
  7. avoid trackers, web bugs and beacon cookies. Cookies are tiny files. They are an invaluable enhancement for user experience that don’t get wiped from your machine when you leave the page that dropped the cookie. Cookies have become impossible to manage manually because there are so many, and because many cookie bakers try to make it difficult for you to determine the ingredients of their cookies on your machine that fill with your data. That is so creepy. What could go wrong? But tracker cookies are worse than most. They keep collecting your data even after you leave the baker’s web site and disconnect from the Internet. Then, every chance they get when you next connect to the Internet, these cookies will gather more data from you and ever so slyly transmit your data out to their death star. Lots of trackers come in – or as – advertisements, though most are simply invisible to the unaware user. One classic beacon cookie is a picture file with no image, just a tracked data collector, yet done in a way that convinces most (all?) tracker detectors of its innocence. However, it would be foolish to characterize trackers as ever built one way or another. The design goal is and will always be to not look like a tracker. In today’s world, I believe it safe to say that the tracker builders continue to have an easy time of it and are one giant step ahead of the tracker trackers. I have AdBlock Plus, Ghostery, Disconnect and the EFF’s Privacy Badger running on the desktop browser I am using to edit this old page. AdBlock Plus finds 4 ads, but blocking has to be off or I am not able to edit the blog post. Ghostery finds 3 web bugs and Privacy Badger identifies 7 trackers for this page. Disconnect finds 27 requests for my data from a variety of sources. Privacy Badger sees 16 potential trackers and blocks all but three. The thing is, Disconnect has 19 requests sorted under a ‘content’ category that are not blocked, and if I try blocking any of them, the free WordPress weblog breaks. Google and Twitter both seem to have trackers on me that neither Privacy Badger nor Disconnect blocks by default on my browser. Could be I made the wrong choice on some Google policy 11 years ago, could be I was drunk the other day and clicked on accept so I could get to the porn faster, or could be Google or WordPress imposes this unblocked condition. Could even be that they are benign cookies and my tracker trackers know it, though I mostly doubt any scenario other than that they seek to send my data back to the server.
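
For those ready to act on item 6, here is a minimal sketch of the basic peer-to-peer key exchange and encryption flow at the GnuPG command line (assuming gpg is installed – gnupg on a Linux desktop, Gpg4win on Windows; the addresses and file names below are placeholders, and older gpg releases use --gen-key instead of --full-generate-key):

   # create your own key pair
   > gpg --full-generate-key

   # export your public key to hand to a peer, and import the key your peer hands you
   > gpg --armor --output me.asc --export you@example.org
   > gpg --import peer.asc

   # verify the peer's fingerprint out-of-band (phone, in person) before trusting it
   > gpg --fingerprint peer@example.org

   # sign and encrypt a message so only that peer can read it; they decrypt with their private key
   > gpg --armor --sign --encrypt --recipient peer@example.org message.txt
   > gpg --decrypt message.txt.asc

Enigmail wraps essentially this same GnuPG flow inside Thunderbird; the out-of-band fingerprint verification is the one step no tool can do for you.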

As you can see, effective tools are not really available to protect on-line privacy short of end-to-end encryption. What’s more, the bad guys are already using all the tools to keep themselves undetected! The challenge for us is a human behavioral issue that ultimately demands little more than awareness of what is happening around you and a willingness to cooperate in a community of others in search of privacy. Could be that cooperation alone is the overpowering impediment in these polarized times. Oddly, most find it easier to trust Google and Facebook than to trust the people they know. Only if everyone in the communication values privacy and respects one another enough to move together to a peer-to-peer public key cryptography model, using widely accepted and continuously peer reviewed software, can we hope to find satisfactory digital privacy.

We must start somewhere.

I repeat, the bad guys made the changes long ago so your resistance serves only your demise and the ability of others to profit from your data until that time.

Sadly, I’m not at all sure how to convince anyone that spends time on Facebook, Twitter and that lot not to flock toward the loudest bell. You all are throwing your privacy to the wolves. With each catastrophe perpetrated by the very bad guys that the rape of our privacy was supposed to protect us from, the media immediately and loudly lauds the police and other agencies for doing such a great job and proclaims how lucky we are to have them freely spying upon our most personal matters. The agencies, for their part, continue to bungle real case after real case, yet maintain crafty bureaucratic spokespeople to pluck a verbal victory from the hind flanks of each shameful defeat of our privacy. Turns out the agencies don’t even use the pen registers and tap access for the surveillance they claim to be crucial. Instead it is a helper when sweeping up the mess left behind by the last bad guy that duped them. Why are the agencies not preventing the horrible events, as was falsely promised during the effort to legitimize their attacks on our personal privacy?

For genuine privacy, all people in the conversation must confidently and competently employ compatible p2p cryptography. That’s all there is to it. Until folks once again discover the fundamental value in private communications and public official transparency, public accountability is beyond reach… and your privacy and my privacy will remain dangerously vulnerable.

Posted in Code Review, Privacy, Secure Data | Leave a comment