Here's the command for creating a new bucket. The --profile argument is necessary because I've got more than one AWS account configured on this computer. The credentials file in my .aws directory identifies each configured account as a distinct profile and looks something like this. And don't worry: those aren't real credentials.
[default]
aws_access_key_id = AKIAWGIP3J3EM9UB7LN4
aws_secret_access_key = lc4nBHxlZ//QxtyIR43)PzjhHNqfj+PIrnUl+sdf
[bootstrap]
aws_access_key_id = AKIAIEUS9G27WWKPOTBQ
aws_secret_access_key = WQSOIBHsdZ/o@xtyIR43)PzhLNM9jL23rnUl-=Ws
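If you ever lose track of which profiles are configured on a machine, version 2 of the AWS CLI can list them for you directly, so you don't need to open the credentials file at all:

```shell
# Print every profile name defined in ~/.aws/credentials and ~/.aws/config
# (the list-profiles subcommand is available in AWS CLI v2)
aws configure list-profiles
```

Running that against the file above would print "default" and "bootstrap", one per line.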
This particular command uses the s3api CLI rather than the more common aws s3 version; it just happens that the s3api syntax works better for me in this case. The command itself is create-bucket. The aws s3 equivalent would be simply mb, which stands for "make bucket".
The name I'd like to give the bucket is passed as the --bucket argument. Two things about this name: one, it has to be globally unique across the entire S3 system, so if you go for something that's already taken, the command will fail. And two, if you want to serve the site from your own domain, the bucket name must exactly match the fully qualified domain name you'll be pointing at it. In my case, my website will be a subdomain of my thedataproject.net domain.
Finally, the --acl argument - where "acl" stands for "access-control list" - defines who can access the bucket contents. Since this is an internet-facing, public website, we'll have to open it up with the public-read setting. For very good reasons, the default setting would completely shut down public access to the bucket.
aws s3api create-bucket \
--bucket "mysite.thedataproject.net" \
--acl public-read --profile bootstrap
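One caveat worth noting: if your bucket should live in a region other than us-east-1, s3api create-bucket also needs an explicit location constraint. A sketch of what that looks like, with eu-west-1 standing in as an example region:

```shell
# Assumption: creating a bucket outside us-east-1 requires a
# LocationConstraint that matches the target region
aws s3api create-bucket \
	--bucket "mysite.thedataproject.net" \
	--acl public-read \
	--create-bucket-configuration LocationConstraint=eu-west-1 \
	--profile bootstrap
```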
The text that comes back when I run the command tells us that we were successful.
Obviously, we're going to have to add some content to the bucket. It'd be a pretty sad website without any of that. Each of these files contains very minimal HTML - just enough to let us tell them apart from one another when they're requested. The aws s3 sync command updates the contents of the specified bucket so they match what's referenced locally. The dot tells the CLI to include all the contents of the current local directory. Since the bucket is currently empty, the local files will simply be copied over. Once again, we need to specify a publicly readable ACL - this time, it applies to the files we're adding to the bucket.
aws s3 sync . s3://mysite.thedataproject.net \
--acl public-read \
--profile bootstrap
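If you're ever syncing against a bucket that already has content, it's worth previewing the operation first. The sync command's --dryrun flag reports what would be copied or overwritten without actually touching anything:

```shell
# --dryrun prints the operations sync would perform without executing them
aws s3 sync . s3://mysite.thedataproject.net \
	--acl public-read \
	--dryrun \
	--profile bootstrap
```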
That's enough to give us a working website. Assuming we got the permissions right, the internet should be just a little bit bigger right now. Let me head over to take a look and make sure it all happened the way I expected. Getting the right URL can be a bit tricky. Perhaps someone will correct me on this, but I've never found a way to retrieve an S3 URL from the CLI. Of course, you could always get it through the browser GUI, but that takes time and, frankly, kind of defeats the purpose.
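That said, the website endpoint does follow a documented pattern, so you can assemble it yourself from the bucket name and region - the region being something aws s3api get-bucket-location can report. A minimal sketch, assuming the bucket lives in us-east-1:

```shell
# Build the S3 static-website endpoint from its documented pattern:
#   http://<bucket>.s3-website-<region>.amazonaws.com
# Assumption: the bucket's region is us-east-1; substitute your own,
# e.g. from: aws s3api get-bucket-location --bucket "$bucket"
bucket="mysite.thedataproject.net"
region="us-east-1"
website_url="http://${bucket}.s3-website-${region}.amazonaws.com"
echo "$website_url"
```

Note that some newer regions use a dot rather than a hyphen after "s3-website", so double-check the endpoint format for your region.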