I was recently shown the Best. Chrome. Extension. Ever.

JSONView is a Chrome extension that formats JSON when it’s opened in Chrome. Normally, when you hit a URL that returns JSON, Chrome displays a pile of goo. Actually, it just displays the JSON as is, but most rendered JSON is not well formatted (and doesn’t need to be; it’s not intended for humans).

JSONView cleans up the formatting and applies syntax highlighting. Better still, it displays any errors it finds while parsing the JSON. If you are developing an API that sends JSON, it’s a great way to make sure what you are sending is valid and, when it’s not, to debug the problem.

It seems like everyone knew about this before me, but if you don’t, you just might want to add it to your toolbox.

Recently (I seem to start a lot of posts with “Recently”), I was on the road and needed to access a server that was behind a firewall. There was no VPN, and access was limited to a small set of IPs. I could, however, access another server in that set of IPs. That would let me bounce through for SSH access, but really I needed access from my laptop.
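One way to do the bouncing is SSH’s jump-host support. A minimal sketch, assuming a reasonably recent OpenSSH client; the hostnames and username here are placeholders:

ssh -J spike@bastion.example spike@internal.example

Or, to make it stick, in ~/.ssh/config:

Host internal.example
    ProxyJump spike@bastion.example

With that in place, a plain ssh internal.example (and anything that rides on SSH, like scp or rsync) bounces through the reachable server transparently.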

rsync will happily copy files between servers and will keep the ownership and permissions the same. However, if you aren’t the owner of all of the files, then syncing ownership requires the rsync on the receiving end (which we’ll call server B) to be running as root. Likewise, on the sending server (server A), if we don’t own the files we might not be able to read them, and again we need to be running as root.

root on server A is easy; we can just use sudo:

sudo rsync -av /var/www server-b.example:/var

root on server B requires a little more finesse; we need to override the remote rsync command to include sudo, which is done with the --rsync-path option:

sudo rsync -av --rsync-path="/usr/bin/sudo /usr/bin/rsync" /var/www server-b.example:/var

(This presumes you can run sudo without a password on server B; things get a little hairy if you can’t.)
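For the record, passwordless sudo for just rsync usually comes down to a sudoers entry along these lines; the username and file name are illustrative, and you should edit it with visudo:

# /etc/sudoers.d/rsync (hypothetical): let spike run rsync as root without a password
spike ALL=(root) NOPASSWD: /usr/bin/rsync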

Great. Done. But what if you can’t log into server B with a password?

I’m embarking on a long, long overdue project to organize my dotfiles, you know, all those files in your home directory that start with “.” and configure your software. (If you don’t, then this isn’t the post, or the blog, for you.)

There are lots of schemes for storing and distributing dotfiles, and I’ll get to that. But first, I need to clean up the mess of living in the UNIX shell for 30 years. Really.

My goals are twofold: 1) Any time I spin up a new server, I want to be able to deploy my preferred configuration out to it. 2) I want a complete copy of my local machine’s config settings to make it (relatively) simple to set up on a new machine.

These goals conflict.

In my occasional series on waiting for things, I set up a Bash function to wait for AWS CloudFront invalidations. I mentioned it would be possible to invalidate and wait in the same function, but I was feeling lazy and left it to the reader. Well, I’ve gotten tired of the two-step process, so I’m being industrious so I can be lazy.
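The combined version might look something like this sketch; the function name and distribution ID are made up, and it assumes a configured AWS CLI. create-invalidation kicks things off and wait invalidation-completed does the polling:

cf_invalidate_wait() {
    local dist_id=$1; shift
    local inv_id
    # start the invalidation and capture its ID
    inv_id=$(aws cloudfront create-invalidation \
        --distribution-id "$dist_id" --paths "$@" \
        --query 'Invalidation.Id' --output text) || return
    # block until CloudFront reports it complete
    aws cloudfront wait invalidation-completed \
        --distribution-id "$dist_id" --id "$inv_id"
}

Called as, say, cf_invalidate_wait E1234EXAMPLE '/*'.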

Here’s about as esoteric a post as I ever write: my love of pushd and its little-abused directory stack. If you don’t live on the command line, move along; there is nothing to see here.
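For everyone else, a quick refresher, with throwaway paths:

pushd /var/www      # cd to /var/www and push the old directory onto the stack
pushd /etc/nginx    # again; the stack now holds three directories
dirs -v             # list the stack, with indexes
popd                # pop back to /var/www
pushd +1            # rotate a stack entry (by index) to the top and cd there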

If you have a lot of SSH keys loaded you may run into the dreaded:

Received disconnect from 10.10.10.10: 2: Too many authentication failures for spike

This happens because the SSH client tries each key in order, until it finds one that works. The SSH server allows only so many authentication attempts before kicking the client to the curb (default 6, controlled by the MaxAuthTries setting). Fortunately, there’s a fix.
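One common fix is the IdentitiesOnly option, which tells the client to offer only the key you explicitly name. In ~/.ssh/config (the key file here is a placeholder):

Host 10.10.10.10
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes

Or as a one-off on the command line:

ssh -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519 spike@10.10.10.10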

A quickie today on leveraging “the cloud” for warm-ish spare servers.

I run a mix of physical and cloud-based servers. The cloud is convenient; however, in general, I prefer physical servers for lower cost (over time, anyway) and greater control. Of course, that means depending on hardware, upstream connectivity, data center power, etc.

I sometimes hedge my bets by keeping a backup copy of the server in AWS.