Uploading data, for example your off-site backup files, to Amazon S3 is easier than you might think. Here are some basic steps with links.
First, set up an Amazon AWS account if you don’t already have one (you will need a credit card, of course). Then create your Access Key and Secret Key (Access Identifiers). AWS uses these identifiers to authenticate requests and to identify the sender of a request. Two types of identifiers are available: (1) AWS Access Key Identifiers (AKIs) and (2) X.509 certificates. AKIs appear to be easier for average-joe users like me, so that is what I decided to use. While you are in your account, also sign up for the S3 and CloudFront services (easily done, just a few clicks and you are there).
Second, using your AKIs, access your S3/CloudFront services with the Firefox S3Fox Organizer extension or a desktop application like the Manager for Amazon CloudFront. I found S3Fox Organizer to be very nice (though there was a small bug where I had to re-enter my credentials after closing and reopening the browser).
On the server side, assuming you want to upload a large backup file from a Linux server, I recommend downloading s3cmd from the Amazon S3 Tools project, which requires Python 2.4 or newer and some fairly common Python modules. You can use this command line tool to directly access your S3 buckets (a bucket is similar to a directory, but its name must be unique across the entire S3 network).
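The install-and-configure step looks roughly like this sketch (the package manager commands are assumptions that depend on your distribution; `s3cmd --configure` is the tool's interactive setup):

```shell
# Install s3cmd -- on many distros it is packaged; otherwise
# download it from the S3 Tools project site and install manually.
apt-get install s3cmd        # Debian/Ubuntu-style systems
# yum install s3cmd          # Red Hat-style systems

# Interactive setup: prompts for your Access Key and Secret Key,
# saves them to ~/.s3cfg, and offers to test the connection.
s3cmd --configure
```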
To give you an example of the power of s3cmd, here is a part of the manpage:
s3cmd - tool for managing Amazon S3 storage space and Amazon CloudFront content delivery network
s3cmd [OPTIONS] COMMAND [PARAMETERS]
s3cmd is a command line client for copying files to/from Amazon S3 (Simple Storage Service) and performing other related tasks, for instance creating and removing buckets, listing objects, etc.
s3cmd can perform several actions, specified by the following commands:
ls s3://BUCKET[/PREFIX]
List objects or buckets
la
List all objects in all buckets
put FILE [FILE…] s3://BUCKET[/PREFIX]
Put file into bucket (i.e. upload to S3)
get s3://BUCKET/OBJECT LOCAL_FILE
Get file from bucket (i.e. download from S3)
del s3://BUCKET/OBJECT
Delete file from bucket
sync LOCAL_DIR s3://BUCKET[/PREFIX]
Backup a directory tree to S3
sync s3://BUCKET[/PREFIX] LOCAL_DIR
Restore a tree from S3 to local directory
cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2], mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Make a copy of a file (cp) or move a file (mv). Destination can be in the same bucket with a different name or in another bucket with the same or different name. Adding --acl-public will make the destination object publicly accessible (see below).
setacl s3://BUCKET[/OBJECT]
Modify Access control list for Bucket or Files. Use with --acl-public or --acl-private
info s3://BUCKET[/OBJECT]
Get various information about a Bucket or Object
du [s3://BUCKET[/PREFIX]]
Disk usage - amount of data stored in S3
Commands for CloudFront management:
cflist
List CloudFront distribution points
cfinfo [cf://DIST_ID]
Display CloudFront distribution point parameters
cfcreate s3://BUCKET
Create CloudFront distribution point
cfdelete cf://DIST_ID
Delete CloudFront distribution point
cfmodify cf://DIST_ID
Change CloudFront distribution point parameters
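To make the common commands concrete, here is a sketch of a typical session (the bucket name my-unique-backups and the file names are made up; bucket names must be globally unique, so pick your own):

```shell
# Hypothetical session; "my-unique-backups" is a placeholder bucket name.
s3cmd mb s3://my-unique-backups                   # make a new bucket
s3cmd put backup.tar.gz s3://my-unique-backups/   # upload a file
s3cmd ls s3://my-unique-backups                   # list the bucket's contents
s3cmd get s3://my-unique-backups/backup.tar.gz restored.tar.gz  # download
s3cmd del s3://my-unique-backups/backup.tar.gz    # delete the object
s3cmd rb s3://my-unique-backups                   # remove the (now empty) bucket
```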
If you did all of the above correctly (which is quite easy if you have basic sysadmin skills), you have configured s3cmd with your Access and Secret keys per the installation instructions, and the installation script ran its access tests OK. For me the installation was flawless and worked perfectly (lucky me!). I could easily access my S3 buckets, create new buckets, and upload and download files, with one exception.
There was one small glitch. When I tried to upload a file, I got an error:
ERROR: S3 error: 400 (MalformedXML): The XML you provided was not well-formed or did not validate against our published schema
Doing what most of us do, I googled the error message and, sure enough, the answer was on the same developer’s site:
… there is a bug in s3cmd that causes this error. To work around it add a slash at the end of your bucket name.
s3cmd put your_file.txt s3://mysandbox/
So, after a few minutes of setting up an account, installing a bit of software, and finding the solution to a small bug, I uploaded a 110 MB MySQL gzipped backup to S3. I’ll let you know how many pennies I am billed for this cloud service when the bill arrives 🙂
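For reference, the whole backup-and-upload step boils down to something like this sketch (the database name and credentials are placeholders, and mysandbox is the example bucket from above; note the trailing-slash workaround):

```shell
# Hypothetical backup pipeline; adjust the database name, user, and bucket.
mysqldump --user=backup -p mydatabase | gzip > mydatabase.sql.gz

# Upload to S3 -- the trailing slash on the bucket name works around
# the MalformedXML bug described above.
s3cmd put mydatabase.sql.gz s3://mysandbox/
```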