IT Blog

AWS Technical Articles

AWS Command Line Interface Primer

This tutorial is a quickstart guide to using the AWS Command Line Interface (CLI).

The guide assumes you have installed the CLI tools. If you’re running macOS or Linux, simply download and install Python and run:

pip install awscli

On Windows you can install the binaries using the following links: https://s3.amazonaws.com/aws-cli/AWSCLI64.msi (64-bit) or https://s3.amazonaws.com/aws-cli/AWSCLI32.msi (32-bit).
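
To confirm the CLI is installed and on your path, you can check its version (the exact version string will differ on your system):

$ aws --version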

Our aim with this guide is to deploy a simple CentOS 7 AMI that will be used as a web server, in addition to setting up a simple S3 bucket we can use to host some files.

Once the CLI is installed, we can run the aws command and see what options are available:

$ aws
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

The format of aws commands is a top-level command such as “ec2” or “s3”, followed by a sub-command such as “describe-instances”, followed by the associated arguments.

Before we can successfully run the aws command, we need to be able to authenticate against AWS services. We do this either by setting environment variables with our API credentials, by adding them to the AWS credentials file, or by supplying them on the command line. We will add our credentials to the credentials file. To do this:

  1. Create a file called “credentials” in your “.aws” directory (within your home directory if using Mac/Linux)
  2. Update the values for aws_access_key_id and aws_secret_access_key within the [default] section

Your file should look something like this:

$ cat .aws/credentials
[default]
aws_access_key_id = your_keyid_goes_here
aws_secret_access_key = your_key_goes_here
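
Alternatively, the “aws configure” command will prompt you for the access key, secret key, default region and output format, and write these files for you:

$ aws configure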

Now run a test to make sure you are able to connect successfully:

$ aws ec2 describe-instances --output=table
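
The --output flag controls the response format (json, text or table), and --query filters the response with a JMESPath expression. For example, a quick way to list just the instance IDs:

$ aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --output text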

Now that we can run our API commands, let’s start off by creating an SSH key-pair so we can access our instance later on (you can also use the “import-key-pair” sub-command of ec2 to import your own).

$ aws ec2 create-key-pair --key-name test-keypair

From the output, we will have to copy everything from -----BEGIN RSA PRIVATE KEY----- through -----END RSA PRIVATE KEY----- and put it into a file; we will place it in .ssh/test-keypair.pem. You will also need to change the permissions on the file so it is only readable by yourself (the logged-in user):

$ chmod 0400 .ssh/test-keypair.pem
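
If you would rather skip the copy-and-paste step, the key material can be written straight to the file with a --query filter. A sketch of the same key-pair creation (if you already ran create-key-pair above, you would need to delete that key-pair or pick a different name):

$ aws ec2 create-key-pair --key-name test-keypair --query 'KeyMaterial' --output text > .ssh/test-keypair.pem
$ chmod 0400 .ssh/test-keypair.pem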

Once the key-pair has been created, we want to create a security group to allow access to our server. We will open ports TCP/80 (HTTP) and TCP/22 (SSH).

$ aws ec2 create-security-group --group-name webserver-sg --description "security group for web server in EC2"
{
        "GroupId": "sg-XXXXXX"
}

The output of the above is your newly created security group’s ID. As we defined a name for the group, we can use that name when mapping the ports to open:

$ aws ec2 authorize-security-group-ingress --group-name webserver-sg --protocol tcp --port 22 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-name webserver-sg --protocol tcp --port 80 --cidr 0.0.0.0/0
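
To verify both rules were applied, you can describe the group by name:

$ aws ec2 describe-security-groups --group-names webserver-sg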

By default, we should have a VPC that we will connect our instance to. You can use the “describe-vpcs” sub-command to confirm this is the case:

$ aws ec2 describe-vpcs
{
    "Vpcs": [
        {
            "VpcId": "vpc-XXXXXX",
            "InstanceTenancy": "default",
            "CidrBlockAssociationSet": [
                {
                    "AssociationId": "vpc-cidr-assoc-XXXXXX",
                    "CidrBlock": "172.31.0.0/16",
                    "CidrBlockState": {
                        "State": "associated"
                    }
                }
            ],
            "State": "available",
            "DhcpOptionsId": "dopt-XXXXXX",
            "CidrBlock": "172.31.0.0/16",
            "IsDefault": true
        }
    ]
}

If you have a default VPC configured, proceed to the next step to create the instance:

$ aws ec2 run-instances --image-id ami-b6bb47d4 --security-group-ids webserver-sg --count 1 --instance-type t2.micro --key-name test-keypair --query 'Instances[0].InstanceId'

With the above command, we spun up an EC2 instance with the following parameters:

  • AMI Image ID – This is the CentOS 7 x86_64 based image, which can be found on the AWS Marketplace
  • Security Group – This is the security group we created earlier
  • Count – How many instances we wish to deploy
  • Instance Type – Here we are using t2.micro
  • Key Pair – This is the keypair we created earlier
  • Query – Here we are asking for the ID of the newly created instance
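
Before going further, it can be handy to block until the new instance actually reaches the “running” state. A minimal sketch using the “wait” sub-command (i-XXXXXX stands in for the instance ID returned above):

$ aws ec2 wait instance-running --instance-ids i-XXXXXX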

The ID printed by run-instances is what you will need for the next steps. Now we would like to allocate a Public IP address to the instance so we can access it externally. First, we will need to request a new Public IP address within our VPC:

$ aws ec2 allocate-address --domain vpc
{
    "PublicIp": "1.1.1.1",
    "Domain": "vpc",
    "AllocationId": "eipalloc-XXXXXX"
}

The above will allocate the address and return both the Public IP (“1.1.1.1” here stands in for the address you are actually given) and its allocation ID. As this is a VPC address, the allocation ID is what we pass when associating it with our instance. Now let’s associate it:

$ aws ec2 associate-address --instance-id i-XXXXXX --allocation-id eipalloc-XXXXXX

Above we used the “associate-address” sub-command, specifying the instance ID of our instance as well as the allocation ID of the Elastic IP to associate.
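
To confirm the association took effect, the “describe-addresses” sub-command will list the address along with the instance it is now attached to:

$ aws ec2 describe-addresses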

To find out more about the addresses and interfaces associated with our instance, we can run the “describe-network-interfaces” sub-command:

$ aws ec2 describe-network-interfaces --output=table

With the above all in place and our instance up and running, we can connect to it using the private key we created above:

$ ssh -i .ssh/test-keypair.pem centos@1.1.1.1
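
From here, turning the instance into a working web server is only a few commands away. As a sketch, assuming the stock CentOS 7 image where the Apache package is httpd:

$ sudo yum install -y httpd
$ sudo systemctl start httpd
$ sudo systemctl enable httpd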

As you can see, this is a really simple and quick way to deploy an instance. The CLI is a very powerful tool which allows you to script the process and instantiate many instances on demand programmatically, without the need to use the AWS Console.

Next, let’s extend this process to see how easy it is to create an S3 bucket and upload some files…

S3 Buckets

Let’s say we have a directory on our machine called /home/user/s3-files and we want to sync it with an S3 bucket. We will leverage the “s3” CLI command and create our s3-files bucket (bucket names are globally unique across all AWS accounts, so in practice you will need to pick a name that is not already taken):

$ aws s3 mb s3://s3-files

The above created an S3 bucket using the “mb” (make bucket) sub-command. We could just as easily delete the newly created bucket using the “rb” (remove bucket) sub-command:

$ aws s3 rb s3://s3-files
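
Note that “rb” will only remove an empty bucket. If the bucket still contains objects, the --force flag will delete the contents first and then remove the bucket:

$ aws s3 rb s3://s3-files --force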

You can see that it is mimicking a POSIX style command interface. This is important as we can also use other sub-commands like “ls” to list our buckets, or the contents of a bucket:

$ aws s3 ls

or

$ aws s3 ls s3://s3-files

We can copy files from our local file system to the S3 bucket just as easily:

$ aws s3 cp /home/user/s3-files/test-file.txt s3://s3-files/
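
The copy works in the other direction too; to pull a file back out of the bucket, simply swap the source and destination (the local path here is illustrative):

$ aws s3 cp s3://s3-files/test-file.txt /home/user/s3-files/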

We can also copy a whole batch of files at once. The s3 sub-commands do not expand UNIX style wildcards in path arguments themselves, so to copy everything matching a pattern we combine --recursive with --exclude/--include filters:

$ aws s3 cp /home/user/s3-files/ s3://s3-files/ --recursive --exclude "*" --include "*.txt"
upload: /home/user/s3-files/test.txt to s3://s3-files/test.txt

We could just as easily remove files using the same filter approach:

$ aws s3 rm s3://s3-files/ --recursive --exclude "*" --include "*.txt"
delete: s3://s3-files/test.txt

A really useful sub-command is “sync”. This allows you to synchronise a directory to an S3 bucket:

$ aws s3 sync /home/user/s3-files/ s3://s3-files

The above will scan for any changes locally and synchronise them to the S3 bucket.
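
By default, sync only copies new and changed files; files deleted locally are left in the bucket. If you want the bucket to mirror the local directory exactly, add the --delete flag:

$ aws s3 sync /home/user/s3-files/ s3://s3-files --delete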

Again, the s3 command is very useful in managing our S3 buckets, allowing us to automate the task of bringing up buckets on demand as well as manipulating their content programmatically without having to write complex code.