Solid Queue, along with Solid Cache, a similar database-backed cache, and a database-backed Action Cable adapter, are all slated to become the defaults in Rails 8.
So far, Solid Queue is working well for me.
To use Solid Queue, install the gem:
bundle add solid_queue
Then, use its installer to create the necessary tables and configure it as Active Job’s production adapter:
bin/rails generate solid_queue:install
If you want to use Solid Queue in development, you will need to enable it in your config/environments/development.rb:
Rails.application.configure do
  # [...]
  config.active_job.queue_adapter = :solid_queue
end
Finally, you need to run the migrations to create the tables:
bin/rails db:migrate
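Before wiring up a worker, it helps to have a job to test with. A minimal sketch (ExampleJob is a hypothetical name, not something the installer creates):

# app/jobs/example_job.rb -- a trivial job to confirm the wiring.
class ExampleJob < ApplicationJob
  queue_as :default

  def perform(message)
    Rails.logger.info "ExampleJob ran with: #{message}"
  end
end

Enqueuing it from bin/rails console with ExampleJob.perform_later("hello") writes a row to Solid Queue's tables for a worker to pick up.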
Once Solid Queue is installed and configured, Active Job will transparently use it. However, as with other Active Job backends, Solid Queue requires a worker process to pull jobs off the queue. Let’s look at some options for running that process.
The Rails development environment defaults to the Active Job Async backend. This is fine for development and doesn't require any additional infrastructure. However, I prefer that my development environment mirrors production as closely as possible.
There are several ways to run the worker process locally.
Out of the box, you can open a new tab/window and run:
bundle exec rake solid_queue:start
Alternatively, you can install the foreman gem and create a Procfile.dev with:
web: env RUBY_DEBUG_OPEN=lazy bin/rails server -p 3000
background_jobs: bin/rails solid_queue:start
Then you can run:
foreman start -f Procfile.dev
If you have installed the tailwindcss-rails gem, it has already set up a Procfile.dev and added a script, ./bin/dev, that uses foreman to start Rails and Tailwind. In that case, you can just add background_jobs: bin/rails solid_queue:start to Procfile.dev.
My preference for local development is to use Docker Compose. This way, everything my application needs can be self-contained. Setting up a Docker Compose environment is beyond the scope of this post (and a post I really need to write). For now, I have a README, but if you already have it set up, just copy your Rails service and change the command. It should look something like:
services:
  # [...]
  solid_queue:
    build: .
    stdin_open: true
    tty: true
    command: bundle exec rake solid_queue:start
    depends_on:
      - database
    environment:
      - DATABASE_URL=postgresql://postgres@database
    volumes:
      - .:/usr/src/app
      - gem_cache:/gems
If a Procfile exists in your repo, Heroku will use it to configure your dynos. Create the following Procfile:
web: bin/rails server -p ${PORT:-5000} -e $RAILS_ENV
background_jobs: bin/rails solid_queue:start
On your next deploy, a new dyno labeled "background_jobs" will appear. You will need to enable (and pay for) it.
I think Solid Queue is a great option for Active Job. It is easy to set up and use, and it simplifies your infrastructure. I am looking forward to seeing it become the default.
Obviously, you'll need to have Docker installed.
Presuming you have a directory with your PHP scripts and any supporting HTML, CSS, assets, etc., you'll need a docker-compose.yml file and possibly a Dockerfile. Create the following docker-compose.yml file:
version: "3.8"
services:
  php:
    image: php:7.2-apache
    stdin_open: true
    tty: true
    ports:
      - "80:80"
    volumes:
      - .:/var/www/html/
Run docker-compose up. That's it, you can now connect to http://localhost/ if your entry point is index.php or index.html. Otherwise, you can connect to http://localhost/filename.php (or filename.html if you're working with a form). Edit, test, repeat. When you're done, just hit Control-C.
The PHP Docker image ships with a fair number of extensions preinstalled and enabled. You can see the out of the box configuration with:
docker-compose run php php -i
(The first php is the name of the service, the second is the start of the command.)
However, some extensions, typically ones that require additional libraries, are not installed by default. In this case, you'll need a Dockerfile to install dependencies and enable the extension. For example, let's install the PHP GD Image Processing extension.
Dockerfile:
FROM php:7.2-apache
RUN apt-get update && apt-get -y install libjpeg-dev libpng-dev zlib1g-dev git zip
RUN docker-php-ext-configure gd \
    --with-png-dir=/usr/include \
    --with-jpeg-dir=/usr/include \
    && docker-php-ext-install gd \
    && docker-php-ext-enable gd
The Dockerfile does the following:
- FROM php:7.2-apache - start with the PHP 7.2 image; anything after will be layered on that.
- RUN apt-get update && apt-get -y install libjpeg-dev libpng-dev zlib1g-dev git zip - install the required dependencies to build the GD extension.
- RUN docker-php-ext-configure gd [...] - configure, build, and enable the GD extension.
Obviously, your dependencies and the arguments to docker-php-ext-* will vary with the extension you are installing, but a little searching should find you the right invocation.
Once the Dockerfile is set up, update docker-compose.yml to use it by changing:
image: php:7.2-apache
to:
build: .
Now run docker-compose build followed by docker-compose up. The build step is only needed when the Dockerfile changes.
Composer is a PHP dependency manager for installing packages. For example, I've used it to install PHPMailer. Again, you will need a Dockerfile:
FROM php:7.2-apache
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
COPY . /var/www/html
WORKDIR /var/www/html
RUN composer require phpmailer/phpmailer
ENV PATH="~/.composer/vendor/bin:./vendor/bin:${PATH}"
Walking through this:
- FROM php:7.2-apache - again, start with the PHP 7.2 image and layer on top of that.
- COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer - install Composer from the composer Docker image.
- COPY . /var/www/html and WORKDIR /var/www/html - copy all of our files to /var/www/html and, in effect, cd there. This ensures Composer will install packages in the right place and find its config, if any.
- RUN composer require phpmailer/phpmailer - install the package.
- ENV PATH="~/.composer/vendor/bin:./vendor/bin:${PATH}" - ensure that any executables installed by Composer will be found.
Composer can optionally read a composer.json for a list of dependencies. If you have one of those, you would call composer install instead of using the require command above.
Again, docker-compose build and docker-compose up get you going.
If you need to enable extensions and install packages, simply put both in the Dockerfile:
FROM php:7.2-apache
RUN apt-get update && apt-get -y install libjpeg-dev libpng-dev zlib1g-dev libfreetype6-dev git zip
RUN docker-php-ext-configure gd \
    --with-png-dir=/usr/include \
    --with-jpeg-dir=/usr/include \
    && docker-php-ext-install gd \
    && docker-php-ext-enable gd
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
COPY . /var/www/html
WORKDIR /var/www/html
RUN composer require phpmailer/phpmailer
ENV PATH="~/.composer/vendor/bin:./vendor/bin:${PATH}"
Once you need databases or other services, you've gone beyond a small favor. Still, it happens… You'll need to add the additional infrastructure to the docker-compose.yml file. Here's an example for PostgreSQL and Redis:
version: "3.8"
services:
  php:
    image: php:7.2-apache
    stdin_open: true
    tty: true
    environment:
      - DATABASE_URL=postgres://postgres@database
      - REDIS_URL=redis://redis:6379/1
    ports:
      - "80:80"
    volumes:
      - .:/var/www/html/
  database:
    image: postgres
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    volumes:
      - db_data:/var/lib/postgresql/data
  redis:
    image: redis
volumes:
  db_data:
This sets up Docker containers running Postgres and Redis, which can be accessed using the hostnames database and redis respectively. Using the environment section of the PHP host, I've injected the connection strings. How you'd use/configure them is, of course, totally dependent on your code.
This example doesn’t use a Dockerfile
, but it could. Doing so is left as an
exercise to the reader.
Finally, a few handy commands to help you get by:
- docker-compose ps - see what's running.
- docker-compose stop - shut it down. Same as hitting Control-C in the window you ran start in.
- docker-compose exec php bash - get a shell on the running PHP container.
- docker-compose run php bash - start a PHP container and get a shell. Shuts down when you exit.
Hopefully, this will make it a little easier when you want to help a friend with their PHP adventures!
Building Emacs has only gotten easier since macOS Catalina, so let's make this quick.
The easiest way to get everything, save Xcode (which you already have), is with Homebrew. In fact, the Emacs 27.1 build process now plays nicely with Homebrew, making this my preferred method.
brew install autoconf automake gnutls
git clone git://git.savannah.gnu.org/emacs.git
cd emacs
git checkout emacs-27
(master is the development branch, emacs-27 is the current released version.)
The version 27.1 build process knows about Homebrew and where it stores stuff. In the past you'd have to add the path to makeinfo ("/usr/local/opt/texinfo/bin") to your PATH, but this is no longer necessary. Emacs 26 and possibly early versions of 27 also had problems finding Libxml2, but this has been corrected as well.
In short, as long as you have installed the prerequisites with Homebrew, you no longer need to do anything to your environment.
make configure
./configure --with-ns
make install
Note: make install builds the macOS app bundle, nextstep/Emacs.app; it doesn't actually install anything. Test it with:
open nextstep/Emacs.app
Then reveal it in the Finder:
open -R nextstep/Emacs.app
and drag it to the Applications folder. Done.
Fortunately, there's a simple fix.
All you need to do is add:
AddKeysToAgent yes
to your .ssh/config. Just as it says on the label, keys are automatically added to the SSH Agent when they are used. You'll be prompted for the passphrase on first use, no need to add it separately. This works not only with ssh itself, but with things like Git that use SSH under the hood. Boom!
However, there’s another format I run into all the time, the format Github uses in URLs when linking to a specific line number:
https://github.com/someorg/somerepo/blob/somebranch/app/controllers/application_controller.rb#L38
Here, the line number is in the form '#LNN', which makes total sense as a URL fragment identifier. I get a lot of these URLs in chat when discussing problems and I decided to add support for them in my function.
The old version was:
function ec () {
  if [[ $1 =~ (.*):([0-9]+)$ ]]; then
    $EMACSCLIENT -c -n "+${BASH_REMATCH[2]}" "${BASH_REMATCH[1]}"
  else
    $EMACSCLIENT -c -n "$@"
  fi
}
The new version is:
function ec () {
  if [[ $1 =~ (.*)[:#]L?([0-9]+)$ ]]; then
    $EMACSCLIENT -c -n "+${BASH_REMATCH[2]}" "${BASH_REMATCH[1]}"
  else
    $EMACSCLIENT -c -n "$@"
  fi
}
The only change is to the regular expression (.*)[:#]L?([0-9]+)$, which says "capture everything up to : or #, optionally followed by "L", and then followed by nothing but numbers to the end of line, which should be captured as well".
Technically, this regexp would allow the format filename:LNN but that doesn’t keep me up at night.
I typically just copy the text to the right of the branch name in the Github URL, but a clever reader could easily modify the function to take the full URL and just grab the useful bits.
Ironically, I started this post in vi. Why? I'd updated brew and a change in the ImageMagick version broke Emacs.
The complexity of this exercise waxes and wanes as the dependencies of
Emacs change and as the versions of tools Apple ships get old or get
updated. This time around, there is one new requirement.
Emacs has replaced OpenSSL with GnuTLS for making SSL connections, so you'll want to install it. With Homebrew, it's brew install gnutls. If you use another package manager, you know how to work it. Building that from scratch is left as an exercise for the reader.
Then there are the things you’ve needed for a while now.
Xcode, free in the Mac App Store. If you don't have this already, why are you trying to compile Emacs?
Autoconf and Automake. The easiest way to install is Homebrew via brew install autoconf automake. If you use another package manager, you know how to work it. If you are all-in on building from source, check out the above guide for details.
makeinfo (part of the Texinfo suite). Apple ships makeinfo, but at some point, the system version fell below the minimum version Emacs needs to build. Once again, Homebrew is the easiest way, with brew install texinfo.
However you get makeinfo, make sure that the path to your new version comes before /usr/bin, where Apple has installed theirs. For the Homebrew version you'd want:
export PATH="/usr/local/opt/texinfo/bin:$PATH"
Now that you have what you need, grab the source:
git clone git://git.savannah.gnu.org/emacs.git
cd emacs
Check out the emacs-26 branch (master is the development branch):
git checkout emacs-26
One last bit of housekeeping. Emacs 26 uses Libxml2 to parse XML. The library is included with Xcode, but the Emacs build process can’t find it. Give it a hint with:
export LIBXML2_CFLAGS=`xml2-config --cflags`
export LIBXML2_LIBS=`xml2-config --libs`
Now you’re ready to configure and build:
make configure
./configure --with-ns
make install
As always, the make install step doesn't actually install anything; it's used to create the app bundle after building Emacs.
Test it:
open nextstep/Emacs.app
If it looks good, reveal it in the Finder:
open -R nextstep/Emacs.app
drag Emacs to the Applications folder and you're good to go!
git commit opens a new frame in my existing session. I also had a simple Bash alias that let me open any file in a frame as well:
alias ec="$EMACSCLIENT -c -n"
Where $EMACSCLIENT is the path to emacsclient on whatever system I'm on, -c is to create a new frame for the file, and -n is to tell emacsclient to exit immediately instead of waiting for me to finish editing.
This has served me well for years. However, there's one bit of laziness I've been wanting to implement: handling filenames (or file paths) that include line numbers, i.e. filename.rb:123. It's common for error messages to be spit out in the form of:
./lib/something/somefile.rb:126:in `some_method'
Likewise, test frameworks like rspec will report failing tests in the form of:
./spec/something/somefile_spec.rb:152
If I want to look at that error, I need to select the filename without the line number, paste it to open the file and then go to line 126. Being lazy, I wanted a way to do that in one step. Fortunately, EmacsClient supports opening a file at a given line by adding “+linenumber” to the command line:
emacsclient -c -n +126 lib/something/somefile.rb
I can take advantage of this by replacing my ec alias with a function and adding a bit of smarts.
function ec () {
  if [[ $1 =~ (.*):([0-9]+)$ ]]; then
    $EMACSCLIENT -c -n "+${BASH_REMATCH[2]}" "${BASH_REMATCH[1]}"
  else
    $EMACSCLIENT -c -n "$@"
  fi
}
The if statement uses a Bash Regular Expression to test if the file path ends in a colon followed by numbers. If so, it captures the file path in $BASH_REMATCH[1] and the line number in $BASH_REMATCH[2] ($BASH_REMATCH[0] is everything that matched, without regard to the parentheses). It then adds the + option to the command line, follows that with the value of $BASH_REMATCH[2], and uses $BASH_REMATCH[1] for the file path.
If there isn’t a match, it simply uses the argument for the filename, as it always has.
Laziness and Bash functions for the win!
brew install autoconf automake. If you use another package manager, you know how to work it. If you are all-in on building from source, check out the above guide for details.
makeinfo (part of the Texinfo suite). Apple ships makeinfo, but at some point the system version fell below the minimum version Emacs needs to build.
makeinfo can also be installed from Homebrew:
brew install texinfo
But before building Emacs, you need to get it into your $PATH ahead of /usr/bin/makeinfo:
export PATH="/usr/local/opt/texinfo/bin:$PATH"
If you want to build Texinfo from source, you can:
cd /tmp
curl -O https://ftp.gnu.org/gnu/texinfo/texinfo-6.5.tar.gz
tar xf texinfo-6.5.tar.gz
cd texinfo-6.5
./configure
make
sudo make install
This installs into /usr/local/bin, so make sure that's in your $PATH ahead of /usr/bin.
Once you have the prerequisites squared away, the build is the same as it’s been for a while. Get the source:
git clone git://git.savannah.gnu.org/emacs.git
cd emacs
Check out the emacs-25 branch (master is the development branch):
git checkout emacs-25
Configure and compile (make install builds the application bundle, it doesn't actually install anything):
make configure
./configure --with-ns
make install
Test:
open nextstep/Emacs.app
And, if it looks good, install it by revealing it in the Finder:
open -R nextstep/Emacs.app
and dragging Emacs to the Applications folder.
This builds Emacs 25.3. There are no major new features from 25.1, but there are some important bug and security fixes.
Enjoy your new lightsaber!
rvm install ruby-1.9.3-p551, it will barf with compiler errors. The workaround, found here, is to use:
rvm install ruby-1.9.3-p551 --with-gcc=gcc
You will get a warning:
Ruby ‘ruby-1.9.3-p551’ was built using clang - but it’s not (fully) supported, expect errors.
However, I haven't run into any errors.
Yes, 1.9.3 is not supported and, in a perfect world, I would be able to upgrade projects using it.
No, I don’t care if you hate RVM.
brew updating the version of libjpeg. My walk-through almost works; however, the Emacs build now depends on a newer version of makeinfo (part of the texinfo suite) than the one that ships with Sierra.
To work around this, install the latest texinfo with brew:
brew install texinfo
Then, before running make configure, add its bin directory to the front of your path:
export PATH="/usr/local/opt/texinfo/bin:$PATH"
Then build as per the [original article](/2016/09/28/building-emacs-on-macos-sierra/).
You could make the PATH setting permanent by adding it to your .bashrc/.bash_profile, but I never use it for anything other than building Emacs.
See ya again when Emacs 26 or macOS 10.13 comes out.
rvm use 2.4.1@project-name --create
gem install rails
rails new project-name
cd project-name
rvm use 2.4.1@project-name --ruby-version
echo '/.ruby-*' >> .gitignore
The only change is bumping Ruby to 2.4.1.
But this leads to a rant:
When I posted the original version, some troll commented something along the lines of:
“You lost me when you said rvm.”
Dude (of any, all, or no gender), you are an idiot. Not because you don’t like RVM, but because it doesn’t matter that you don’t like RVM and you are wasting your time hating on it.
I’m not going to defend RVM because:
a. You can google and find plenty of people explaining why RVM is wrong and plenty of other people explaining why people who think RVM is wrong are wrong.
b. I don’t care. I don’t care for the simple reason RVM works for me. I use RVM because that it was the only option at the time. I have nothing against rbenv, I’ve used it on servers. However, I’m not going to spend a day ripping out something that’s working fine for me to replace it with something that’s technically, but not actually functionally, better for me. Use whatever you like, but don’t be a jerk by telling people their tools are somehow inferior to yours.
Back to the task at hand. I edit the Gemfile, first adding:
ruby '2.4.1'
to the top, allowing Heroku and RVM to automatically pick up the Ruby version (rbenv doesn't do this out of the box, but there is a plugin).
Then I move:
# Use sqlite3 as the database for Active Record
gem 'sqlite3'
into the test and development group:
group :development, :test do
# Use sqlite3 as the database for Active Record in development
gem 'sqlite3'
# [...]
end
sqlite3 is fine, preferable even, for development, but in production you need something more robust. I create a new production group:
group :production do
# Use PostgreSQL as the database for Active Record
gem 'pg' # or gem 'mysql2'
end
I add two development gems I like:
group :development, :test do
  # [...]
  gem 'dotenv-rails'
  gem 'awesome_print'
end
The dotenv gem allows you to manage project-specific environment variables by storing them in a .env file, which we need to ignore:
echo '/.env' >> .gitignore
Awesome Print is a "ruby object pretty printer". It adds an ap method which nicely formats and syntax highlights Ruby objects.
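A quick taste from a Rails console (assuming a User model exists):

require 'awesome_print' # redundant if Bundler already required the gem

# `p User.first` prints one long line; `ap` indents and colorizes.
ap User.first
ap({ name: 'Spike', languages: ['ruby', 'bash'] })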
You can read more about dotenv and Awesome Print in the original post.
The change here is I've stopped using Pry via pry-byebug; I've become accustomed to Byebug, which ships with Rails.
Finally, if it’s a user facing project, as opposed to an API, and it doesn’t have it’s own styles, I bring in Bootstrap:
gem 'bootstrap-sass'
and replace the contents of app/views/layouts/application.html.erb with:
<!DOCTYPE html>
<html>
  <head>
    <title>My Cool App</title>
    <%= csrf_meta_tags %>
    <%= stylesheet_link_tag 'application', media: 'all', 'data-turbolinks-track': 'reload' %>
    <%= javascript_include_tag 'application', 'data-turbolinks-track': 'reload' %>
  </head>
  <body>
    <div id="wrapper" class="container-fluid">
      <%= yield %>
    </div>
  </body>
</html>
And then it’s just:
git add .
git commit -m 'Blank Slate'
Starting with Rails 5.1.1, rails new does a git init unless you use the --skip-git option, so that step is no longer necessary.
As always, this is what works for me, but I think it’s a good starting point to think about what works for you.
pmset -g assertions
2017-05-09 16:15:57 -0600
Assertion status system-wide:
BackgroundTask 0
ApplePushServiceTask 0
UserIsActive 1
PreventUserIdleDisplaySleep 0
PreventSystemSleep 0
ExternalMedia 0
PreventUserIdleSystemSleep 1
NetworkClientActive 0
Listed by owning process:
pid 237(coreaudiod): [0x000472fb00018753] 00:09:40 PreventUserIdleSystemSleep named: "com.apple.audio.AppleHDAEngineOutput:1B,0,1,1:0.context.preventuseridlesleep"
Created for PID: 54000.
pid 127(hidd): [0x00045a9800098430] 01:53:42 UserIsActive named: "com.apple.iohideventsystem.queue.tickle.4295002300.3"
Timeout will fire in 599 secs Action=TimeoutActionRelease
Kernel Assertions: 0x100=MAGICWAKE
id=507 level=255 0x100=MAGICWAKE mod=5/9/17, 2:47 PM description=en0 owner=en0
Idle sleep preventers: IODisplayWrangler
Not all assertions prevent sleep, just PreventUserIdleSystemSleep and PreventUserIdleDisplaySleep. I turn these into a BASH alias, sleepless:
alias sleepless="pmset -g assertions | egrep '(PreventUserIdleSystemSleep|PreventUserIdleDisplaySleep)'"
which gives you a nice, short list:
PreventUserIdleDisplaySleep 0
PreventUserIdleSystemSleep 1
pid 237(coreaudiod): [0x0005051000018753] 00:15:51 PreventUserIdleSystemSleep named: "com.apple.audio.AppleHDAEngineOutput:1B,0,1,1:0.context.preventuseridlesleep"
There are some things, like Bluetooth advertisements:
pid 370(useractivityd): [0x0005077200019fcf] 00:00:02 PreventUserIdleSystemSleep named: "BTLEAdvertisement"
that pop up briefly and don’t actually prevent sleep. Run sleepless a few times to get a sense of what’s actually stuck.
Once you know which process is keeping you awake, you have to find the actual cause. While it's possible that it's an app you can just quit, more often than not, it will be a system process that an app is talking to, for example coreaudiod. So, you have to quit apps one at a time until you find the culprit. If it is coreaudiod, start with tabs in your browser.
For me, it’s always a Flowdock tab in Chrome. It will be different for you, but pretty quickly you’ll know where to look and sleep will come at last.
It's easy enough to do this sort of thing in Apache or NGINX. However, if the real site is in AWS, especially CloudFront, I like the simplicity and single-purposeness of using S3 Static Website hosting for redirects. To set this up, you create an S3 bucket, enable static web hosting, turn on Redirect requests, and give it the domain you want to redirect to. Done.
But, I’m lazy, I don’t like logging in to the AWS console (because I use MFA and so should you), and these tend to come in batches where the client not only wants a www redirect but has also registered a bunch of other random domains/TLDs, because SEO.
Fortunately, these buckets can be setup via the AWS CLI interface.
The first step is to create an empty bucket. That's easy:
aws s3api create-bucket --bucket www.example.com --region us-east-1
The trick, if you want to call it that, is that the bucket name must match the DNS name that will point to it. Any other name won’t work.
The second step is to add the redirection configuration. This is a blob of JSON that’s loaded from a file. That file should look like:
{
  "RedirectAllRequestsTo": {
    "HostName": "example.com"
  }
}
Let’s call it example-redirect.json.
There's a CLI command, put-bucket-website, just for applying this:
aws s3api put-bucket-website --bucket www.example.com --website-configuration file://example-redirect.json
I know what you're thinking. You're thinking "This is a stupid post, it's way easier to log in to the console than it is to create the JSON file." And, you'd be correct. To be easier, that step has to be taken care of automatically, and it can be with a bit of scripting magic:
aws_redirect() {
  aws s3api create-bucket --bucket $1 --region us-east-1
  local json_file=`mktemp`
  printf '{"RedirectAllRequestsTo":{"HostName": "%s"}}\n' $2 > $json_file
  aws s3api put-bucket-website --bucket $1 --website-configuration file://$json_file
  rm $json_file
}
Most of this should be obvious. The script takes two arguments, the site to redirect (which becomes $1) and where to redirect to ($2). The only tricky bit is creating the file of JSON. I use mktemp to generate a randomly named temp file. This avoids conflicts and security issues that could arise if the name wasn't random (unlikely on your personal machine, but hey, why not do it right?). I use printf to generate its content, so I don't have to mess around escaping double quotes. Boom.
So, we’re down to:
aws_redirect www.example.com example.com
and that’s way easier than logging in to the AWS console.
Of course, there’s still DNS setup, but that’s another post.
To make sure we're protected, in a Rails app, simply turn on SSL with:
config.force_ssl = true
in config/environments/production.rb.
If for some reason you can’t use SSL for the whole app… It’s 2017, you have no excuse.
Opaque data (optional). We are going to need to send an identifier for the current user to the browser. The simplest thing is to just send the database ID so we can do:
User.find_by id
(We'll get to how we get id below.)
However, if the database ID would leak information, like how many users we have, we'll need to do something else.
We can get opacity by creating a UUID when we create the user. A simple (slightly naive) approach: add this to your model:
class User < ActiveRecord::Base
  before_create :generate_uuid, unless: :uuid?

  private

  def generate_uuid
    self.uuid = SecureRandom.uuid
  end
end
and make sure you add an index to your migration:
add_index :users, :uuid, unique: true
(The naivety is that there is a tiny chance of a uuid collision that should be handled.)
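One way to handle that chance is to lean on the unique index above: let it reject the duplicate, then retry with a fresh value. A sketch (save_with_uuid_retry is a hypothetical helper, not a Rails API):

# In the User model: retry the save if the unique index
# reports a duplicate uuid.
def save_with_uuid_retry(retries = 3)
  save
rescue ActiveRecord::RecordNotUnique
  raise if (retries -= 1) <= 0
  self.uuid = SecureRandom.uuid
  retry
end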
Then there’s protecting the value from Cross Site Scripting (XSS) attacks. The scope of that is beyond this post, read up and find an expert if you need help. However, if you don’t allow any user-generated content into your iframe, you’re off to a good start. We’ll also make our tokens expire, which will reduce the usefulness of stolen tokens.
So, what is our token? A JSON Web Token (JWT). I covered these a while back; the short version is that a JWT is a standard for creating a signed blob of JSON as a string. Because it's a string, it's easily passed around. Because it's signed, it can't be tampered with.
Add:
gem 'jwt'
to your Gemfile and bundle install.
Creating a signed JWT is easy:
exp = Time.now.to_i + 5.minutes
payload = { user_uuid: @user.uuid, exp: exp }
jwt = JWT.encode payload, Rails.application.secrets.secret_key_base, 'HS256'
Payload is the hash to encode and sign. exp is a special key that sets the expiration time, given in seconds since the Epoch. I've set it to five minutes in the future, but you'll want to think about how long it will be before your user resubmits the JWT and tweak accordingly. Rails.application.secrets.secret_key_base, generated when you start a new Rails project, is a convenient signing key; 'HS256' is the signing algorithm. The resulting JWT is simply a string.
Verifying it when you get it back is simple as well:
begin
  jwt = JWT.decode jwt, Rails.application.secrets.secret_key_base, true, { algorithm: 'HS256' }
rescue JWT::VerificationError
  # Tampered with, handle it.
rescue JWT::ExpiredSignature
  # Expired JWT, handle it.
end
payload = jwt.first
The decoded JWT is an array, with the first element being the payload and the second containing some metadata about the JWT. Given that:
@user = User.find_by uuid: payload['user_uuid']
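Wired into a controller, that verification might live in a before_action. A sketch (the controller name and params[:jwt] convention are my assumptions, matching the AJAX example below):

class Api::BaseController < ApplicationController
  before_action :authenticate_jwt!

  private

  # Expects the JWT in params[:jwt]. JWT::DecodeError is the gem's
  # base error, covering both verification and expiration failures.
  def authenticate_jwt!
    payload, _header = JWT.decode params[:jwt],
                                   Rails.application.secrets.secret_key_base,
                                   true, { algorithm: 'HS256' }
    @user = User.find_by! uuid: payload['user_uuid']
  rescue JWT::DecodeError, ActiveRecord::RecordNotFound
    head :unauthorized
  end
end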
On the client side, it's a question of making sure you have the JWT to send with any request. In your view, you might do something like:
<script>
  $(function() {
    initialMyAwesomeApp("<%= @jwt %>");
  });
</script>
Your awesome app would store the JWT and send it back in whatever AJAX calls it needs to make:
$.get('/api/object.json',
{jwt: jwt, object_id: 'something'},
function(object) { do_something; }
);
This will work pretty well. But, there is one last shortcoming. Cookies are persistent. If you navigate away and then return to a page, the cookie will be sent and the app will have state. The JWT is not persistent; it's just part of the page, so the user won't be "logged in" next time they hit your app.
While that sounds bad, it's usually a non-issue. Truly, you only want to use this technique when you absolutely have to. That's going to be when you are in an iframe in someone else's app. Unless that app is utter crap, there's going to be some kind of verification that you get when your page loads. For example, Atlassian Connect sends a signed JWT in the Authorization header. That verifies the current user when the page loads, and all your JWT needs to do is verify your app's AJAX calls.
In theory, you could use this form of authentication in a single page app, logging the user in when they hit the page and setting up the JWT. But. Don’t. Just don’t. It’s ugly and completely unnecessary.
This is a good technique when you absolutely can't depend on cookies. (And I'll say it one more time.) If you can, do!
Sessions are great. Great that is — until you can't use them.
Sessions depend on cookies, and you can't always use cookies. When can't you use cookies? Safari in an iframe, for one. I do a lot of integration work, add-ons for various platforms, and typically your add-on is embedded via iframes.
Let’s recap the advantages of Rails sessions:
This is awesome. And you should use sessions if you can. In fact, repeat after me:
I will always use cookie based sessions, unless I really, really, can’t.
The main security concern out of the box for Rails specifically is that session cookies are sent in the clear over HTTP, making them vulnerable to sniffing and, in turn, session hijacking. This is fixed by having your app always use HTTPS and by setting the secure flag on the session cookies, telling the browser to never send the cookie over plain HTTP.
If you do something other than use cookies, the above is the bar you have to hit in terms of security, and doing so ranges from the easy to the impossible.
Easy is using HTTPS security. Cookies are sent with all requests, secure or otherwise, which is why the secure flag is necessary. Whatever you do, you control the URLs you are talking to and can ensure they all use HTTPS. However, it would be wise to take out some insurance and configure your app to only use HTTPS by setting config.force_ssl = true in config/environments/production.rb.
Medium is making your data opaque, and you may not care. Generally, it's considered good practice not to expose database IDs. It's low risk; however, because database IDs are usually sequential, they can give away information like how many users you have, the relative age of accounts, and so on. It's a judgment call, but if it matters to your app or business model, you'll need to do it.
This is solved by encrypting IDs you want to protect.
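A minimal sketch of one way to do that, using ActiveSupport's MessageEncryptor (the 'user id' salt is an arbitrary choice; any stable string works):

# Derive a key from the app secret, then encrypt/decrypt the ID.
len    = ActiveSupport::MessageEncryptor.key_len
secret = Rails.application.secrets.secret_key_base
key    = ActiveSupport::KeyGenerator.new(secret).generate_key('user id', len)
crypt  = ActiveSupport::MessageEncryptor.new(key)

token = crypt.encrypt_and_sign(user.id)   # opaque string, safe to expose
id    = crypt.decrypt_and_verify(token)   # => user.id; raises if tampered with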
Impossible is emulating the HttpOnly flag. That functionality is deeply baked (see what I did there?) into the browser. You simply can't do anything like it. Your data will be on the page, and if it's on the page, it's vulnerable to JavaScript attacks.
The good news is you're in control of your page. If you follow best practices (beyond the scope of this post, but well documented) and, especially, if you don't have to display any user-generated content, you should be able to lock it down.
Now that we know how to attack the security challenge, the key is to securely authenticate something that replaces the session cookie. In the final post of this series, I'll do just that with a JSON Web Token (JWT).
cap production ssh
This would open an SSH connection to the production server and cd you to the current directory for the app. Super handy if you work on a lot of projects and don't want to remember the server names and deploy paths for all of them.
Capistrano 3 has a much different internal structure and the gem is long abandoned. Fortunately, it's easy to recreate this functionality in a few lines of code. Create lib/capistrano/tasks/ssh.rake and drop in:
desc "ssh to the current directory on the server"
task :ssh do
on roles(:app), :primary => true do |host|
command = "cd #{fetch(:deploy_to)}/current && exec $SHELL -l"
puts command if fetch(:log_level) == :debug
exec "ssh -l #{host.user} #{host.hostname} -p #{host.port || 22} -t '#{command}'"
end
end
The fiddly bits are the cd && exec and the -t. -t makes sure we have a terminal allocated on the server. Normally, when executing commands (as opposed to starting a shell), a terminal isn't needed, but in this case we're manually starting a shell, and we need it.
SSH, and then cap, will exit when the remote command finishes. If we just ran cd, the command would log in, change directory, and then finish. Exec-ing a shell after the cd causes SSH to wait for the command, which will only finish when we log out. Exec-ing $SHELL instead of hardcoding bash gives us some flexibility.
It’s a simple thing, but a nice time saver.
And for bonus points, the most common reason to open a shell to the server is to play around in the console. You can easily modify this task to do that with:
desc "open rails console on server"
task :console do
on roles(:app), :primary => true do |host|
command = "cd #{fetch(:deploy_to)}/current && bundle exec rails console #{fetch(:rails_env)}"
puts command if fetch(:log_level) == :debug
exec "ssh -l #{host.user} #{host.hostname} -p #{host.port || 22} -t '#{command}'"
end
end
(You could append this to ssh.rake or create a separate console.rake.)
Note that when SSH is used to run a command, it doesn't source your shell configuration. If you use RVM or rbenv, you may need to manually set up your environment. For example, an RVM user might need:
command = "source /etc/profile.d/rvm.sh; cd #{fetch(:deploy_to)}/current && bundle exec rails console #{fetch(:rails_env)}"
Using ; instead of && will cause the cd and bundle to run even if the source fails, a quick and dirty way to make this work in cases where some servers are using RVM and some are not.
"Sessions" are a common concept across web development platforms, but interestingly, they aren't actually a standard like, say, cookies. Sessions use cookies and typically work more or less the same way. However, each implementation is different; a Rails session would not be readable by PHP and vice versa.
The concept itself is simple. Instead of storing different values in separate cookies, have a single cookie that can be used to look up a collection of settings for a "session". Actually, that's a lie given how the default Rails (2+) implementation works, but let's run with it as it's easier to explain.
In Rails, session is a special hash-like object available in your controller, not that different from the cookies hash-like thing. You can set values:
session[:current_user_id] = user.id
and retrieve them anytime the visitor returns:
user = User.find(session[:current_user_id])
The differences are that the session is represented by just one cookie, which is the session id, and, as a result, the data stored in the session is not available to the user. They just see the session id.
Under the hood, you could imagine it looks something like:
session = Session.find(cookie[:session_id]).data
where data is a hash.
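A toy version of that server-side lookup (a hypothetical Session model with a serialized data column; real stores also handle expiry):

class Session < ActiveRecord::Base
  serialize :data, Hash
end

# The cookie only ever holds the random id.
session_id = SecureRandom.hex(16)
Session.create!(session_id: session_id, data: { current_user_id: 42 })

Session.find_by!(session_id: session_id).data[:current_user_id] # => 42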
Now, if you were paying attention when I wrote about cookie security, you might see a problem. As I described, it's relatively easy to modify a cookie. If I can guess someone's session ID and swap out mine for theirs, boom, I'm logged in as them. This is solved by making the session id secure in some way, the easiest method being to use a big random number (like the SecureRandom.hex above), making it impossible, or at least very, very impractical, to guess a valid id.
So, that’s how they work, but what was my lie? That sessions are stored on the server.
If you store session data on the server, you have to deal with it. In very old (pre-2.0) versions of Rails, sessions were stored in individual files. If you forgot to clean them up, you'd find a grumpy sysadmin banging on your door wanting to know what these 50,000 files were filling her disk. You could use a database instead, but now you have 50,000 rows. Sessions are automatically created when you use them, but cleaning out old ones requires some sort of process. That is, unless you make it someone else's problem.
The default Rails cookie store is EncryptedCookieStore. The session hash is serialized and set as the cookie's value. An encrypted cookie is used so that the recipient can't see the stored values. (Prior to Rails 4, CookieStore was used, which worked the same way, but wasn't encrypted, so stored values were (with a bit of effort) visible.)
The cookie approach puts the burden of storing the session data on the visitor. The browser will clean up the cookie when it expires. Pretty handy! And, it solves the "guess the session id" issue: the session cookie value is a blob of encrypted text, effectively a big, random number.
There is a limitation, however: the maximum size of a cookie is 4K, including all of the metadata, like the name of the cookie and the expiration, and any encryption and serialization overhead. If you are storing more than 4K in your session, then you are probably doing it wrong. Typically, session data should be pretty small: a user id, a timezone, a language preference.
If you really want to store more, you can look at the MemCache Store.
Out of the box, Rails session cookies have HttpOnly set, meaning they are not available to JavaScript and thus protected from cross-site scripting attacks (where malicious JavaScript is injected to steal cookies). Do yourself a favor and leave it that way. (Cookies are sent by AJAX requests, so auth is unaffected.)
One issue that Rails sessions don't solve out of the box is sending the session cookie securely. By default, it will be sent over HTTP, allowing it to be sniffed on the network. An attacker could then reuse the cookie, hijacking the session.
Hijacking is easily prevented by using HTTPS, and there's no reason to run a web app without it, so set it up on your production server and set config.force_ssl = true in config/environments/production.rb. As the name implies, this will force Rails to always use HTTPS.
In addition, edit config/initializers/session_store.rb and change
Rails.application.config.session_store :cookie_store, key: '_your-app_session'
to:
Rails.application.config.session_store :cookie_store, key: '_your-app_session',
secure: Rails.env.production?
This will tell the browser never to send the cookie over plain HTTP. This prevents accidents if parts of your site outside of the Rails app are accessible via HTTP.
One last security tip: sessions stored in cookies are vulnerable to being replayed, so don't store information you don't want the user reusing. Say you had a game and your user can buy credits to play it.
session[:credits] = 5
They buy five credits, save a copy of the cookie, and then proceed to play five games.
if session[:credits] > 0
  session[:credits] -= 1
  play_game
end
They are now out of credits. Now they set the cookie back to the original value they saved, back to five credits.
Design your session usage such that there is no advantage to reusing an old one, by keeping the credits on the server:

# At purchase time:
user = User.find(session[:current_user_id])
user.update(credits: 5)

# At play time:
user = User.find(session[:current_user_id])
if user.credits > 0
  user.update(credits: user.credits - 1)
  play_game
end
(Multiple logins and race conditions are left as an exercise for the reader.)
Now that you have a little more depth with sessions, we can finally talk about what happens when you don't have cookies. But that's next time.
As you probably know, an HTTP cookie is a key value store. Under the hood, the key/value pair is sent as an HTTP header, Set-Cookie. When a browser sees this header, it stores the cookie. Next time the browser makes a request to the server, it sends the cookie back in the Cookie header. When setting cookies, multiple cookies == multiple Set-Cookie headers. The browser, on the other hand, can send multiple semicolon-separated cookies in the same Cookie header.
If you met them on the street, they’d look something like:
Set-Cookie: key=value
Set-Cookie: key2=another-value
Cookie: key=value; key2=another-value
All web frameworks, and even web-friendly programming languages, provide some structure so you don't have to generate and parse cookie headers yourself. In Rails, it's a magical hash-like object called, wait for it, cookies! What can we do with it? A fine example is saving a visitor's language preference.
We set it:
cookies[:language] = 'italiano'
and, as the visitor continues to browse the site, we can get it back:
language = cookies[:language]
By default, cookies are scoped to the host name, so a cookie from www.example.org would not be available to shop.example.org. (Cookies are never available cross-domain, so foo.org is never going to see cookies from example.org.) However, the scope can be expanded to the domain, i.e. .example.org, in which case cookies set by www are available to shop and vice versa. With this, we can set a visitor's language preference across all of our sites.
In Rails you set cookie options by passing in a hash instead of just a value:
cookies[:language] = {value: 'italiano', domain: 'example.org'}
Cookies can also be scoped by path. By default, cookies are available to any URL on the site, but it's possible to limit them to a path, like /shop. Typically, this is used to avoid collisions when different apps use the same name for cookies. Again, in Rails:
cookies[:key] = {value: value, path: '/shop'}
Cookies can and will expire. The default is when the "session ends", which typically means when you quit the browser. However, you can also set an explicit expiration. Say we need to display a disclaimer every 30 days:
cookies[:viewed_disclaimer] = {value: true, expires: 30.days.from_now }
Rails also has a shortcut for "permanent" cookies, which creates a cookie that expires in 20 years.
cookies.permanent[:viewed_disclaimer] = true # Once is enough.
Keep in mind nothing is truly “permanent”, least of all data. The visitor may delete the cookies at anytime. They may also switch browsers or visit from a different device, neither of which will have the cookie set.
Cookies have two very important security settings, Secure and HttpOnly. Both of these are off by default in Rails.
Secure says “Only send this cookie over HTTPS.” When this is off, the browser will send the cookie every time it visits the site, when it’s on, the browser will only send the cookie when it visits HTTPS URLs, thus protecting the cookie from being sniffed on the network.
cookies[:private] = {value: 'secret', secure: true }
Note that this setting is for the browser only! It's up to you to make sure the server doesn't send "secure" cookies over HTTP.
HttpOnly is confusing as it doesn't have anything to do with HTTPS vs HTTP. What it actually means is not available to JavaScript (only available to the HTTP protocol stack). When this is off, JavaScript can access the cookie via document.cookie; when it's on, the cookie is invisible to JavaScript. This prevents cookie data from being stolen via a Cross-site scripting (XSS) attack.
cookies[:hidden] = {value: 'No cookie for you!', httponly: true }
Finally, Rails provides two extensions to prevent tampering with or even viewing cookie values. Cookies are opaque to your average user, but with a bit of knowledge you can change them browser-side. Consider a logged-in user: you might store their user ID in a cookie and use it to display the correct dashboard. If the user changed that value, they would be able to view other users' dashboards. Rails prevents this with signed cookies:
cookies.signed[:user_id] = current_user.id
Read them back with:
user_id = cookies.signed[:user_id] # cookies[:user_id] would return gibberish
If the cookie has been tampered with, the signature won't verify and the read will return nil.
If you want to both protect the cookie and hide its value, you can encrypt it instead:
cookies.encrypted[:user_id] = current_user.id
user_id = cookies.encrypted[:user_id] # again cookies[:user_id] is unreadable
Again, the read will return nil if the cookie has been tampered with and thus can't be decrypted.
If you get one thing out of this article, it should be this: cookies are handy for keeping a bit of state information, but once you need security, you have to pay attention and do a bit of work. By default, your cookies and the data they contain are a) public and b) not trustworthy.
It would be nice if there were a tool to help make security a little more foolproof. And of course, that's our topic next time: Sessions!
There are lots of excellent solutions out there for managing public keys within an organization, but these servers are one-offs, so that infrastructure isn't going to get built.
The good news is that 9 out of 10 times the people in question have GitHub accounts, and if someone has a GitHub account, they likely have a public SSH key.
To get a GitHub user's key, you just go to https://github.com/<username>.keys; mine are here: https://github.com/spikex.keys.
If you get a blank page, then they don’t have one. Back to the old email drawing board.
It’s no secret I’m lazy, so naturally I have this automated with a BASH function:
function get_public_keys () {
  mkdir -m 700 -p ~/.ssh
  for user in "$@"
  do
    curl -s https://github.com/"$user".keys >> ~/.ssh/authorized_keys
  done
  chmod 600 ~/.ssh/authorized_keys
}
You use it like:
get_public_keys spikex imadethisoneup
Breaking it down:
- mkdir -m 700 -p ~/.ssh creates the .ssh directory if it doesn't exist and sets permissions that will make sshd happy. If it does exist, the command is a NOOP.
- for user in "$@" loops through the arguments to the function, which is what allows you to specify multiple usernames.
- curl -s https://github.com/"$user".keys >> ~/.ssh/authorized_keys downloads the key(s) and appends them to ~/.ssh/authorized_keys.
- chmod 600 ~/.ssh/authorized_keys again sets permissions to make sshd happy, if we happened to create authorized_keys.
Boom! Done!
JWTs look like:
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyX2lkIjoxMjMsIm5hbWUiOiJKb2huIEQuIiwiYWRtaW4iOnRydWV9.AOTcSDyeCX-P5Huzb_Rc9AlHwvWBZlj9E9qZZ9dpI8U
If we were to split that on the “.”, we get:
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
eyJ1c2VyX2lkIjoxMjMsIm5hbWUiOiJKb2huIEQuIiwiYWRtaW4iOnRydWV9
AOTcSDyeCX-P5Huzb_Rc9AlHwvWBZlj9E9qZZ9dpI8U
The first string is a JSON header with details about the JWT, the second the JSON payload, the data we care about, and the third is a signature, a hash of the header, the payload, and some secret that can be used to verify that JWT as a whole hasn’t been tampered with. All are encoded using Base64URL.
Fortunately, we can leave the implementation details to others. I’m using the Ruby jwt gem, however there are libraries for everything from Go to Perl(!).
Given a Ruby hash (or anything that responds to to_json):
payload = {user_id: 123, name: 'John D.', admin: true }
secret = "I'll never tell!"
I can create a JWT using:
token = JWT.encode payload, secret, 'HS256'
=> "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyX2lkIjoxMjMsIm5hbWUiOiJKb2huIEQuIiwiYWRtaW4iOnRydWV9.AOTcSDyeCX-P5Huzb_Rc9AlHwvWBZlj9E9qZZ9dpI8U"
The token can now be shared, but not changed. Its contents can be viewed by decoding the JWT:
decoded_jwt = JWT.decode token, secret, true, { algorithm: 'HS256' }
(The true is the verify argument, and we are using the options hash to specify what hashing algorithm we are expecting (see below).)
JWTs decode as an array that contains the payload as the first element and some header information about the JWT as the second:
[
  {
    "user_id" => 123,
    "name" => "John D.",
    "admin" => true
  },
  {
    "typ" => "JWT",
    "alg" => "HS256"
  }
]
So, your payload is decoded_jwt.first.
It’s important to understand that the JWT is not encrypted, you can use a blank password and set verify to false and read it:
decoded_jwt = JWT.decode token, '', false, { algorithm: 'HS256' }
JWTs aren’t for keeping secrets. There’s another standard JSON Web Encryption (JWE) for that, we’ll look at another time.
What are they for? Authentication and safely storing state information. If I pass a JWT with the above payload to a client, and it passes it back, I can verify that it hasn't changed, so I can trust the user_id; I set it after all.
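To see the tamper-proofing in action, flip a character in the token from above and try to decode it (a quick irb sketch, using the token and secret variables defined earlier):

# The payload segment starts at the first 'eyJ1'; corrupt it.
tampered = token.sub('eyJ1', 'eyJ2')

begin
  JWT.decode tampered, secret, true, { algorithm: 'HS256' }
rescue JWT::DecodeError => e
  puts "rejected: #{e.class}" # the signature no longer matches
end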
How do you use it? I’ll cover that next time!
But, before I go, we need to have a little talk about security.
Notice the JWT header above includes the signing algorithm, HS256. This can be detected by the JWT library and used when decoding.
decoded_jwt = JWT.decode token, secret, true
However, it’s bad form to depend on this.
One early attack was to generate a new, unsigned JWT (algorithm == 'NONE') and pass that back to the app. Libraries would ignore the password when decoding an unsigned JWT and it would appear to be valid. Most libraries now detect this condition (using a password with NONE) and raise an error. However, there are other possible attacks that work by subverting the expected algorithm, so always specify the algorithm.