Want to upload an image directly to S3 from the browser? Here’s what you need.

UPDATE: I should clearly note that what I’m doing is allowing uploads without the involvement of a server. This is very handy when working with static sites. However, if you have your own server serving up the upload form, then the more traditional approach is to use pre-signed URLs.

var upload_file = function(path, file_field) {
    // Pull the native File object out of the jQuery-wrapped input.
    var file = file_field[0].files[0];
    var fd = new FormData();
    fd.append('key', file.name);
    fd.append('acl', 'bucket-owner-full-control');
    fd.append('Content-Type', file.type);
    fd.append('file', file);

    return $.ajax({
        type: 'POST',
        url: path,
        data: fd,
        processData: false,  // Don't process the data
        contentType: false,  // Don't set contentType
        success: function(json) { console.log('Upload complete!'); },
        error: function(XMLHttpRequest, textStatus, errorThrown) {
            console.log('Upload error: ' + XMLHttpRequest.responseText);
        }
    });
};

$('#s3-form').submit(function(event) {
    event.preventDefault();
    var file_field = $('form').children('input[type=file]');
    upload_file('http://bucket.example.com.s3-us-west-1.amazonaws.com/',
                file_field);
});

Pretty simple looking, eh? Let’s walk through it.

First, we extract the actual file upload object from the jQuery file input field object.

Then, we create a FormData object. Normally you’d get one of these from a form, but we want tight control over what’s in it, and we want to avoid passing any extraneous fields. Just POSTing a form to S3 stores the application/x-www-form-urlencoded data, which is not what we want.
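To see what that tight control buys us, here’s a small standalone sketch of building a FormData by hand and reading the entries back (the values are placeholders; FormData is available in browsers and in recent Node.js):

```javascript
// Build a FormData from scratch rather than from a <form> element,
// so only the fields we choose end up in the request.
var fd = new FormData();
fd.append('key', 'photo.jpg');                  // object key S3 will store the file under
fd.append('acl', 'bucket-owner-full-control');  // permissions on the stored object
fd.append('Content-Type', 'image/jpeg');        // type S3 will serve the object with

// Entries can be inspected, which is handy when debugging the upload:
console.log(fd.get('key')); // "photo.jpg"
console.log(fd.has('file')); // false -- nothing sneaks in that we didn't append
```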

Next, we use .append() to set some key/value pairs, which equate to input fields in a form.

  • key - The name of the file.
  • acl - The permissions in the bucket.
  • Content-Type - The file’s type, ensuring S3 stores it correctly.
  • file - The actual file object from the input field.

Here we’re setting key to be whatever the name of the file is. This may not be the best approach as there could be conflicts. A UUID would be safer.

You have a number of options for acl, but be sure to set one. In the past, at least, it was possible to upload files and find that, even though you could read the bucket, you couldn’t do anything with the files. private and public-read are other good options.

Next, the AJAX call. Setting processData to false tells jQuery not to convert the data into a query string and not to set the content type to “application/x-www-form-urlencoded” (which is its default behavior). If it did, S3 would store the encoded form data as a file, which is so not what we want.

Likewise, setting contentType to false keeps jQuery from doing anything that might override the content-type we are setting in the form data.

Now that our upload_file function is defined, we need to call it. It takes two arguments: path, which is the bucket’s URL, and the jQuery file input field object.

That’s the client side code, but it won’t actually work without some configuration on the S3 side.

First, the bucket needs a publicly writable policy, which is configured on the bucket’s Properties tab under Permissions &gt; Bucket Policy Editor. The policy will look something like:

{
  "Id": "Policy1457459992873",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1457459991497",
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket.example.org/*",
      "Principal": "*"
    }
  ]
}

The goal is to allow anyone (“*”) access to “s3:PutObject”, which writes to the bucket.

You can tweak permissions and try different options using the AWS Policy Generator.
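One tweak worth considering, sketched here with a placeholder bucket name: scope the Resource to a single key prefix, so anonymous writes can only land under uploads/ rather than anywhere in the bucket.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAnonymousUploadsToPrefix",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::bucket.example.org/uploads/*"
    }
  ]
}
```

With that in place, the key field in the form data would need to start with uploads/.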

Wait! What? A world-writable bucket!?!? Yep, that’s what I said. And you’re right to be concerned. It’s unlikely someone would randomly discover this, but someone could look at your code and work it out.

What are the risks?

  1. A DoS attack. Someone can fill your bucket with crap. This isn’t really different from handling file uploads in your app. A villain can just POST tons of files. It’s easier to watch for this abuse on your own server. However, your server is likely to fall over a lot sooner than S3.

  2. If the uploads are configured to be publicly readable, someone could launch a file sharing service out of your bucket. Unlikely, since there are better ways to do that, but not impossible.

  3. Someone overwrites your uploads. Of course, to do that they’d need to know the names of the files, since we’re not allowing listing of the bucket contents. That’s why I’d suggest UUIDs as filenames.

1 &amp; 2 are best solved by monitoring the bucket, either by using S3 Event Notifications or by rolling your own monitoring through the API.

We also need a CORS header. Because the AJAX POST to the amazonaws.com domain is cross-origin, we need S3 to send the Access-Control-Allow-Origin HTTP header. This is configured on the bucket under Permissions &gt; Edit CORS Configuration on the Properties tab.

The configuration below will work, though you might want to fine-tune AllowedOrigin to just the domains you need. More details can be found at https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html#how-do-i-enable-cors

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
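For example, a tightened variant of the rule above locks AllowedOrigin down to one site (the origin shown is a placeholder for your own domain):

```
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://www.example.org</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
```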

And boom! Your “app” is uploading images without any server side code[1]! Down the road, we’ll look at what you might do with it.


  1. OK, there’s a metric ton of S3 server code behind this, but it’s not your problem.
