Recently (I seem to start a lot of posts with “Recently”), I was on the road and needed to access a server that was behind a firewall. There was no VPN, and access was limited to a small set of IPs. I could, however, access another server in that set. That would let me bounce through for SSH access, but really I needed access from my laptop.
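If you’ve hit the same wall, SSH itself can do the bouncing; a minimal sketch, with jump.example.com and inside.example.com standing in for the reachable and firewalled servers:

    # Hop through the reachable server to the firewalled one (OpenSSH 7.3+):
    ssh -J jump.example.com inside.example.com

    # Or forward a local port so tools on the laptop can reach it directly:
    ssh -L 8443:inside.example.com:443 jump.example.com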
A post or two back, I looked at having BASH detect if I was on my “desktop” (for lack of a better word) or a server and decided the best approach was to hard-code that fact. I casually tossed the configuration in my .bashrc. Was that the right place?
rsync will happily copy files between servers and will keep the ownership and permissions the same. However, if you aren’t the owner of all of the files, then syncing ownership requires rsync on the receiving end (which we’ll call server B) to be running as root. Likewise, on the sending server (server A), if we don’t own the files we might not be able to read them, and again we need to be running as root.
root on server A is easy; we can just use sudo:
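Presumably something like this, with placeholder paths and hostname:

    sudo rsync -a /srv/data/ user@serverB:/srv/data/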
root on server B requires a little more finesse: we need to override the remote rsync command to include sudo, which is done with the --rsync-path option:
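Which looks something like this (placeholders again):

    rsync -a --rsync-path="sudo rsync" /srv/data/ user@serverB:/srv/data/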
(This presumes you can run sudo without a password on server B; things get a little hairy if you can’t.)
Great. Done. But what if you can’t log into server B with a password?
I’m embarking on a long, long overdue project to organize my dotfiles: you know, all those files in your home directory that start with “.” and configure your software. (If you don’t, then this isn’t the post or the blog for you.)
There are lots of schemes for storing and distributing dotfiles, and I’ll get to that. But first, I need to clean up the mess of living in the UNIX shell for 30 years. Really.
My goals are twofold: 1) Any time I spin up a new server, I want to be able to deploy my preferred configuration out to it. 2) I want a complete copy of my local machine’s config settings to make it (relatively) simple to set up on a new machine.
These goals conflict.
In my occasional series on waiting for things, I set up a BASH function to wait for AWS CloudFront invalidations. I mentioned it would be possible to invalidate and wait in the same function, but I was feeling lazy and left it to the reader. Well, I’ve gotten tired of the two-step process, so I’m being industrious so I can be lazy.
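Roughly what the combined version looks like; a sketch assuming the AWS CLI is installed (the function name is my own):

    # cf-invalidate-wait: create an invalidation and block until it finishes.
    # Usage: cf-invalidate-wait DISTRIBUTION_ID [PATH ...]   (default path: /*)
    cf-invalidate-wait() {
        local dist_id="$1"; shift
        local inv_id
        inv_id=$(aws cloudfront create-invalidation \
            --distribution-id "$dist_id" \
            --paths "${@:-/*}" \
            --query 'Invalidation.Id' --output text) || return
        aws cloudfront wait invalidation-completed \
            --distribution-id "$dist_id" --id "$inv_id"
    }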
Here’s about as esoteric a post as I ever write: my love of BASH and its little-abused directory stack. If you don’t live on the command line, move along, there is nothing to see here.
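For the uninitiated, the stack in question is the one pushd and popd maintain; a quick illustration:

    pushd /etc        # cd to /etc and push the old directory onto the stack
    pushd /var/log    # cd again; the stack grows
    dirs -v           # list the stack, one numbered entry per line
    popd              # pop the top entry and cd back to /etc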
If you have a lot of SSH keys loaded you may run into the dreaded “Too many authentication failures” disconnect.
This happens because the SSH client tries each key in order, until it finds one that works. The SSH server allows only so many authentication attempts before kicking the client to the curb (default 6, controlled by the MaxAuthTries setting). Fortunately, there’s a fix.
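The fix, presumably the one this post goes on to cover, is to pin a specific key per host in ~/.ssh/config so the client offers only that one (hostname and key file are placeholders):

    Host server.example.com
        IdentityFile ~/.ssh/server_key
        IdentitiesOnly yes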
It’s surprisingly hard to find info on getting the MIME type of a file in Rails.
A quickie today on leveraging “the cloud” for warm-ish spare servers.
I run a mix of physical and cloud-based servers. The Cloud is convenient; however, in general, I prefer physical servers for lower cost (over time, anyway) and greater control. Of course, that means depending on hardware, upstream connectivity, data center power, etc.
I sometimes hedge my bets by keeping a backup copy of the server in AWS.
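The mechanics vary, but one cheap flavor of warm-ish spare is a stopped EC2 clone you boot on demand; a sketch assuming the AWS CLI and a placeholder instance ID:

    # Boot the spare and wait until it is running (instance ID is a placeholder):
    aws ec2 start-instances --instance-ids i-0123456789abcdef0
    aws ec2 wait instance-running --instance-ids i-0123456789abcdef0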
So, first I talked about my first programming experience, learning via type-in programs. Then I looked at the shotgun approach my university took. In this final post of the series, I’m going to talk about the most important learning experience on my path to becoming a developer.