A couple of posts back, I walked through uploading an image to AWS S3 without the need for a server of your own. This has the advantage of being a fully standalone, browser-only solution. Of course, images by themselves aren’t very useful; most likely we want to collect some additional information along with them.

Browser-to-S3 uploads carry the risk of creating a world-writable S3 bucket. Why take that risk? Scale. With S3, Amazon has built out a huge amount of infrastructure that’s Not Your Problem™, and their economies of scale mean they run it more cheaply than you could on your own.

(Don’t take this as a commercial for AWS; there are many situations where I feel you are overpaying for convenience. However, when you need large-scale infrastructure on demand, it’s a clear win.)

For this example, let’s say we are responsible for collecting entries for a photo contest. It’s going to run during a highly watched TV show and we’re going to get slammed with responses over a fairly small window.

Before you get started, review the bucket settings you need to allow direct uploads.
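If you haven’t done that yet, the short version is that the bucket needs a CORS policy permitting POST and PUT from your page’s origin. A minimal sketch, with a placeholder origin you’d swap for your own domain:

```json
[
    {
        "AllowedOrigins": ["https://contest.example.com"],
        "AllowedMethods": ["POST", "PUT"],
        "AllowedHeaders": ["*"]
    }
]
```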

We’re going to collect name, email, caption and the photo thusly:

<form id="entry-form">
    <p><input name="name"></p>
    <p><input type="email" name="email"></p>
    <p><input name="caption"></p>
    <p><input id="photo" type="file" name="photo" accept="image/x-png, image/gif, image/jpeg" /></p>
    <button type="submit">Enter!</button>
</form>

Hopefully, your form is prettier (labels would be nice).

With one addition, our previous code for uploading the image will work fine here. Because we are going to get tons of uploads, the odds are we are going to have duplicate filenames. We need to generate a unique name for each photo, which we’ll also use as the identifier for our data. There are any number of JavaScript UUID libraries, but I happen to use this one. Once downloaded:

<script src="uuid.js"></script>
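(If you’d rather not add a dependency, a minimal random-based version 4 generator is only a few lines. This isn’t the library above, just a stand-in that produces the same shape of ID:)

```javascript
// Build a version 4 UUID from random hex digits.
// Fine for avoiding filename collisions; not cryptographically strong.
function uuidv4() {
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
        var r = Math.random() * 16 | 0;
        var v = (c === 'x') ? r : (r & 0x3 | 0x8);
        return v.toString(16);
    });
}
```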

And our image uploader code becomes:

var upload_image = function(url,form,id) {
    var field = $(form).find('input[name=photo]');
    var file = field[0].files[0];
    var original_name = field.val();
    var extension = original_name.substr((original_name.lastIndexOf('.')));
    var filename = id + extension;

    var fd = new FormData();
    fd.append('key', filename);
    fd.append('acl', 'bucket-owner-full-control');
    fd.append('Content-Type', file.type);
    fd.append("file",file);

    return $.ajax({
        type : 'POST',
        url : url,
        data : fd,
        processData: false,  // tell jQuery not to serialize the FormData
        contentType: false   // let the browser set the multipart boundary
    });
};

id will be the S3 filename, our UUID. We’ll see why we’re returning the $.ajax call in a minute.
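That filename derivation is easy to sanity-check in isolation. Here it is pulled out of upload_image as a hypothetical helper; note it assumes the original name actually has an extension:

```javascript
// Derive the S3 key: the UUID plus the original file's extension.
function s3_key(original_name, id) {
    var extension = original_name.substr(original_name.lastIndexOf('.'));
    return id + extension;
}

s3_key('summer.jpeg', 'abc123');   // "abc123.jpeg"
```

Because lastIndexOf finds the final dot, this also works on the fake paths some browsers return from field.val(), like "C:\fakepath\summer.jpeg".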

Now we need to upload the rest of the form:

var upload_data = function(path,data) {
    return $.ajax({
        type:     'PUT',
        url:      path,
        headers:  {'x-amz-acl' : 'bucket-owner-full-control'},
        data: JSON.stringify(data)
    });
};

Unlike with the image upload, where the filename is inferred from the upload data, path here needs to be the full bucket path, including the filename with the .json extension. S3 will simply write that data string into the file.
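Worth noting: jQuery’s serializeArray skips file inputs, so the photo itself never ends up in this object; the shared UUID is what ties the JSON file back to the image. A stored entry would contain something like this (values made up):

```json
{
    "name": "Grace Hopper",
    "email": "grace@example.com",
    "caption": "Caught mid-debug"
}
```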

data is just an object that we can turn into JSON. To get the form into that data object, I use this snippet, which I believe I got long ago from this Stack Overflow post: http://stackoverflow.com/questions/1184624/convert-form-data-to-javascript-object-with-jquery

$.fn.serializeObject = function()
{
    var o = {};
    var a = this.serializeArray();
    $.each(a, function() {
        if (o[this.name] !== undefined) {
            if (!o[this.name].push) {
                o[this.name] = [o[this.name]];
            }
            o[this.name].push(this.value || '');
        } else {
            o[this.name] = this.value || '';
        }
    });
    return o;
};

That code has some limitations on how the fields can be named; see the Stack Overflow post or just do a little Googling on the topic if you run into issues.
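The merging logic itself doesn’t depend on jQuery, so you can exercise it with a hand-built, serializeArray-style input (the to_object name and field values here are made up for illustration):

```javascript
// Standalone version of the merge above: repeated names become arrays.
function to_object(pairs) {
    var o = {};
    pairs.forEach(function (p) {
        if (o[p.name] !== undefined) {
            if (!o[p.name].push) {
                o[p.name] = [o[p.name]];
            }
            o[p.name].push(p.value || '');
        } else {
            o[p.name] = p.value || '';
        }
    });
    return o;
}

var entry = to_object([
    {name: 'name', value: 'Ada'},
    {name: 'email', value: 'ada@example.com'},
    {name: 'caption', value: 'My cat'}
]);
// entry.name is 'Ada'; a second 'caption' pair would turn
// entry.caption into an array of both values.
```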

Given our uploaders, we can glue it all together like so:

$('#entry-form').submit(function( event ) {
    event.preventDefault();
    var bucket = 'https://s3.amazonaws.com/bucket.example.com/';
    var id = window.uuid.v4();
    var data = $(this).serializeObject();
    // This is where you would disable the form and start a spinner
    var path = bucket + id + ".json";
    $.when(upload_data(path,data),upload_image(bucket,this,id))
        .done(function(r1,r2){
            if (!r1[0] && !r2[0]) {
                console.log('Upload complete!');
                // Stop that spinner, let the user know.
            } else {
                // One of the ajax calls failed.
                console.log(r1);
                console.log(r2);
            }
        });
});

The reason we are returning the results of the $.ajax calls in our functions is so that we can pass them to $.when. You can read up on the details, but for our purposes $.when will fire the done callback only when both of our $.ajax calls have completed, allowing us to do whatever comes next. r1 holds the result of the first call we passed in, r2 the second. You will want some smarter error handling, yes?
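As an aside, if you ever move this off jQuery, the same coordination is what Promise.all does for native promises. A sketch with stand-in uploads (fake_upload is hypothetical, just to make the shape clear):

```javascript
// Stand-ins for the two uploads; real code would return fetch() promises.
function fake_upload(result) {
    return Promise.resolve(result);
}

Promise.all([fake_upload('data ok'), fake_upload('image ok')])
    .then(function (results) {
        // results[0] / results[1] play the role of r1 / r2 above.
        console.log(results.join(', '));
    })
    .catch(function (err) {
        // Fires if either upload rejects; this is the error handling hook.
        console.log('Upload failed: ' + err);
    });
```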

This is running a bit long, so I’ll save the question of what to do with the data for next time.
