Amazon CloudFront

Uploading data to Amazon S3, for example your off-site backup files, is easier than you might think. Here are some basic steps with links.

First, set up an Amazon AWS account if you don't already have one. You will need a credit card (of course!). Then, create your Access Key and Secret Key (Access Identifiers). AWS uses these identifiers to authenticate requests and to identify the sender of a request. Two types of identifiers are available: (1) AWS Access Key Identifiers (AKIs), and (2) X.509 Certificates. AWS AKIs appear to be easier for average users like me, so that is what I decided to use. In addition, sign up for the S3 and CloudFront services (easily done while in your account; just a few clicks and you are there).

Second, using your AKIs, access your S3/CloudFront services on AWS using the Firefox S3Fox Organizer or a desktop application such as the Manager for Amazon CloudFront. I found the S3Fox Organizer (a Firefox plugin) to be very nice (though there was a small bug where I had to re-enter my credentials after I closed and reopened the browser).

On the server side, assuming you want to upload a large backup file from a Linux server, I recommend you download s3cmd from Amazon S3tools, which requires Python 2.4 or newer and some fairly common Python modules. You can use this command-line tool to directly access your S3 buckets (a bucket is similar to a directory, but its name must be unique across the entire S3 network).
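Once s3cmd is installed, setup is a single interactive command. A minimal sketch (it will prompt for the Access Key and Secret Key you created earlier):

```shell
# Prompts for your Access Key, Secret Key, and a few options,
# writes them to ~/.s3cfg, and runs a connection test against S3.
s3cmd --configure
```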

To give you an example of the power of s3cmd, here is part of its manpage:

s3cmd – tool for managing Amazon S3 storage space and Amazon CloudFront content delivery network


s3cmd  is a command line client for copying files to/from Amazon S3 (Simple Storage Service) and performing other related tasks, for instance creating and removing buckets, listing objects, etc.

s3cmd can do several actions specified by the following commands.

mb s3://BUCKET
Make bucket

rb s3://BUCKET
Remove bucket

ls [s3://BUCKET[/PREFIX]]
List objects or buckets

la
List all objects in all buckets

put FILE [FILE...] s3://BUCKET[/PREFIX]
Put file into bucket (i.e. upload to S3)

get s3://BUCKET/OBJECT LOCAL_FILE
Get file from bucket (i.e. download from S3)

del s3://BUCKET/OBJECT
Delete file from bucket

sync LOCAL_DIR s3://BUCKET[/PREFIX]
Backup a directory tree to S3

sync s3://BUCKET[/PREFIX] LOCAL_DIR
Restore a tree from S3 to local directory

cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Make a copy of a file (cp) or move a file (mv). Destination can be in the same bucket with a different name or in another bucket with the same or different name. Adding --acl-public will make the destination object publicly accessible (see below).

setacl s3://BUCKET[/OBJECT]
Modify Access Control List for a bucket or files. Use with --acl-public or --acl-private

info s3://BUCKET[/OBJECT]
Get various information about a Bucket or Object

du [s3://BUCKET[/PREFIX]]
Disk usage – amount of data stored in S3

Commands for CloudFront management

cflist
List CloudFront distribution points

cfinfo [cf://DIST_ID]
Display CloudFront distribution point parameters

cfcreate s3://BUCKET
Create CloudFront distribution point

cfdelete cf://DIST_ID
Delete CloudFront distribution point

cfmodify cf://DIST_ID
Change CloudFront distribution point parameters

If you did all the above correctly (which is quite easy if you have basic sysadmin skills), you have configured s3cmd with your Access and Secret Keys per the installation instructions, and the installation script ran its access tests OK. When I did this, the installation was flawless and worked perfectly (lucky me!). I could easily access my S3 buckets, create new buckets, and upload and download files (with one exception).

There was one small glitch. When I tried to upload a file, I got an error:

ERROR: S3 error: 400 (MalformedXML): The XML you provided was not well-formed or did not validate against our published schema

Doing what most of us do, I googled the error message and, sure enough, the answer was on the developer's own site:

… there is a bug in s3cmd that causes this error. To work around it add a slash at the end of your bucket name.

s3cmd put your_file.txt s3://mysandbox/

So, after a few minutes of setting up an account, installing a bit of software, and finding the solution to a small bug, I uploaded a 110 MB MySQL gzipped backup to S3.  I’ll let you know how many pennies I am billed for this cloud service when the bill arrives 🙂
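If you want to automate the same kind of backup, here is a rough sketch of a nightly job; the database name, dump options, and bucket are all assumptions you would replace with your own (note the trailing slash on the bucket name, per the workaround above):

```shell
# Dump the database, gzip it, upload it to S3, then clean up.
# "mydb" and "mysandbox" are placeholders.
DATE=$(date +%F)
mysqldump --single-transaction mydb | gzip > /tmp/mydb-$DATE.sql.gz
s3cmd put /tmp/mydb-$DATE.sql.gz s3://mysandbox/
rm /tmp/mydb-$DATE.sql.gz
```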

