A quickie today on leveraging “the cloud” for warm-ish spare servers.

I run a mix of physical and cloud-based servers. The cloud is convenient, but in general I prefer physical servers for their lower cost (over time, anyway) and greater control. Of course, that means depending on hardware, upstream connectivity, data center power, etc.

I sometimes hedge my bets by keeping a backup copy of the server in AWS. Of course, the above “lower cost” would be lost if I kept that server running all the time. But stopped EC2 instances are nearly free; you pay only for storage.

So, what I want is something that spins up the instance, syncs data to it, and then shuts it down again. It turns out this is pretty straightforward with the AWS CLI.

All we need to start an instance is its ID, which we pass to the ec2 start-instances command:

aws ec2 start-instances --instance-ids $INSTANCE_ID

Instances take time to boot, of course, and we can’t start our sync until the instance is up. Fortunately, the AWS CLI provides for that with the ec2 wait instance-status-ok command:

aws ec2 wait instance-status-ok --instance-ids $INSTANCE_ID

As the name implies, this waits until the instance’s status is OK, polling every 15 seconds, pausing our script until the instance is up. OK status is tied to the instance’s health checks and, in my experience, lines up with SSH becoming available. If you find the status going OK before you can actually connect, you could explicitly wait for SSH instead:

until nc -z "$INSTANCE_IP" 22 >/dev/null 2>&1; do sleep 15; done

(Note that here we need the instance’s IP address, not its ID.)
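One caveat with that loop is that it will spin forever if the instance never comes up. Here’s a minimal sketch of a bounded variant; the 20-try cap (roughly five minutes) is an arbitrary number of mine, not anything AWS-mandated:

```shell
# Wait for SSH on the given address, giving up after 20 attempts
# (~5 minutes at 15 seconds per try). Tune the cap to your boot times.
wait_for_ssh() {
  tries=0
  until nc -z "$1" 22 >/dev/null 2>&1; do
    tries=$((tries + 1))
    test "$tries" -lt 20 || return 1
    sleep 15
  done
}
```

You’d then call `wait_for_ssh "$INSTANCE_IP" || exit 1` in place of the bare loop.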

Once you’ve done the sync you’ll want to stop the server, which is as easy as:

aws ec2 stop-instances --instance-ids $INSTANCE_ID

Unsurprisingly, AWS doesn’t have a “wait for not OK” function. I simply assume that the instance will shut down. If that concerns you, you could poll with describe-instance-status until it’s down, though that would require some JSON parsing or grepping.
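If you do want to block until the instance is down, here’s a rough sketch that polls with a --query filter to sidestep the JSON parsing. (I’m using describe-instances rather than describe-instance-status because the latter omits stopped instances unless you pass --include-all-instances, which makes it awkward to script against.)

```shell
# Poll every 15 seconds until the instance's state is "stopped".
wait_until_stopped() {
  until [ "$(aws ec2 describe-instances --instance-ids "$1" \
      --query 'Reservations[0].Instances[0].State.Name' \
      --output text)" = "stopped" ]; do
    sleep 15
  done
}
```

Current versions of the AWS CLI also ship an instance-stopped waiter (`aws ec2 wait instance-stopped --instance-ids $INSTANCE_ID`) that does this for you.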

Putting it all together, it might look something like this:

#!/bin/sh

test $# -ne 0 || { echo "Usage: $0 instance-id"; exit 1; }

INSTANCE_ID=$1

AWS='/usr/local/bin/aws'

echo "Starting instance $INSTANCE_ID"
$AWS ec2 start-instances --instance-ids "$INSTANCE_ID" > /dev/null
echo "Waiting for $INSTANCE_ID to be OK"
$AWS ec2 wait instance-status-ok --instance-ids "$INSTANCE_ID"
# Sync with "rsync" or the tool of your choice here.
echo "Shutting down $INSTANCE_ID"
$AWS ec2 stop-instances --instance-ids "$INSTANCE_ID" > /dev/null

(The start and stop commands output some JSON that we don’t need for this application, hence the redirects to /dev/null.)
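For the sync step itself, the script needs the instance’s address. As a purely hypothetical example (the user name, paths, and helper names here are my inventions, not part of the original script), you could look up the public IP with describe-instances and hand it to rsync:

```shell
# Look up the instance's current public IP. (A stopped-and-started
# instance usually gets a new one unless you've attached an Elastic IP.)
instance_ip() {
  aws ec2 describe-instances --instance-ids "$1" \
    --query 'Reservations[0].Instances[0].PublicIpAddress' --output text
}

# Push the data over SSH; the user and paths are placeholders.
sync_to_instance() {
  rsync -az --delete /srv/data/ "backup@$(instance_ip "$1"):/srv/data/"
}
```

In the script, a call like `sync_to_instance "$INSTANCE_ID"` would go where the sync comment is.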

So, that will do the trick, but before I go, let’s talk about security. For this to work, the server you are backing up needs to have IAM credentials stored on it. Of course, you maintain iron-clad security on your servers and they will never, ever be compromised. However, in the unlikely event one is, you want those credentials to carry very limited permissions. Otherwise, before you can say “botnet”, you’ll be running one.

Using https://awspolicygen.s3.amazonaws.com/policygen.html, set up an IAM policy for EC2 that allows only DescribeInstanceStatus, StartInstances, and StopInstances, and only on the instance(s) you want to sync with. That way, if the source server is compromised, the most grief an attacker can cause is stopping and starting your backup servers.
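For reference, such a policy might look roughly like this. The region, account ID, and instance ID are placeholders; also note that DescribeInstanceStatus doesn’t support resource-level permissions, so that one statement has to be granted on "*":

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstanceStatus",
      "Resource": "*"
    }
  ]
}
```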

So, best of both worlds, servers you control and spares in the magical cloud.
