[![PyPI](https://img.shields.io/pypi/v/aws-scripts.svg)](https://pypi.org/project/aws-scripts/)
[![license](https://img.shields.io/github/license/mashape/apistatus.svg)](https://opensource.org/licenses/MIT)
aws-scripts
===========
Here you will find some useful AWS scripts I use from time to time.
All the scripts rely on [Boto](http://aws.amazon.com/sdkforpython/), a Python package that provides interfaces to Amazon Web Services. So, to use these scripts, you need to install Boto and provide your AWS credentials.
To install aws-scripts and all the required Python packages, just type:
```
pip install aws-scripts
```
If dependencies are already satisfied, nothing will be installed.
To provide your AWS credentials use the boto/boto3 config file `~/.aws/credentials`:
``` ini
[default]
aws_access_key_id = <XXXXXXXXXXXXXXXXXXX>
aws_secret_access_key = <xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
region=xx-xxxx-x
```
> Note that you can use the environment variable `AWS_DEFAULT_REGION=xx-xxxx-x` to override the default region set in the config file.
> In the ec2-instances.py script you can also use the `--region` option for the same purpose.
ec2-instances.py
----------------
Lists the EC2 instances, including the Name tag, IP, type, zone, VPC, ID and status.
You can filter the results by name, type and/or status, or provide a list of instance IDs instead.
Finally, you can execute remote commands on all the instances returned by the filter or the list.
The `-h` option shows you how to use the available options.
```
usage: ec2-instances.py [-h] [-n NAME] [-t TYPE] [-s STATUS]
                        [-l ID_LIST [ID_LIST ...]] [-i IGNORE] [-e EXECUTE]
                        [-r REGION] [-u USER]

optional arguments:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  Filter result by name.
  -t TYPE, --type TYPE  Filter result by type.
  -s STATUS, --status STATUS
                        Filter result by status.
  -l ID_LIST [ID_LIST ...], --id_list ID_LIST [ID_LIST ...]
                        Do not filter the result. Provide an InstanceIds list
                        instead.
  -i IGNORE, --ignore IGNORE
                        Do not show hosts lines containing the "IGNORE"
                        pattern in the tag Name
  -e EXECUTE, --execute EXECUTE
                        Execute a command on instances
  -r REGION, --region REGION
                        Specify an alternate region to override the one
                        defined in the .aws/credentials file
  -u USER, --user USER  User to run commands if -e option is used. Ubuntu user
                        is used by default
```
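Under the hood, filters like these map onto the EC2 API's filter syntax. The sketch below, which is not the script's actual code, shows how the `-n`/`-t`/`-s` options could translate into the `Filters` argument that boto3's `describe_instances()` accepts; `build_filters` is a hypothetical helper name:

```python
# Illustrative only: map the -n/-t/-s options onto EC2 API filters.
def build_filters(name=None, instance_type=None, status=None):
    """Build the Filters list for ec2.describe_instances()."""
    filters = []
    if name:  # match against the Name tag, allowing partial matches
        filters.append({"Name": "tag:Name", "Values": ["*" + name + "*"]})
    if instance_type:  # e.g. "t3.micro"
        filters.append({"Name": "instance-type", "Values": [instance_type]})
    if status:  # e.g. "running", "stopped"
        filters.append({"Name": "instance-state-name", "Values": [status]})
    return filters

# With boto3 (credentials read from ~/.aws/credentials) this would be used as:
#   import boto3
#   ec2 = boto3.client("ec2")
#   reply = ec2.describe_instances(Filters=build_filters(name="web"))
```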
ec2-reserved.py
----------------
Lists details of all your instance reservations, including a summary of the active reservations by type and size.
The summary also shows your reserved active capacity after applying the normalization factor. This is useful to compare the reserved capacity with the capacity deployed in production.
You can also use the `--create-google-calendar-events` option to add the expiration dates of the active reservations to your Google Calendar account.
```
usage: ec2-reserved.py [-h]
                       [-s {payment-pending,active,payment-failed,retired}]
                       [--create-google-calendar-events] [-t TYPE]

Show reserved EC2 instances

optional arguments:
  -h, --help            show this help message and exit
  -s {payment-pending,active,payment-failed,retired}, --state {payment-pending,active,payment-failed,retired}
                        Filter result by reservation state.
  --create-google-calendar-events
                        Create events in your Google Calendar, using the
                        expiration dates of your active reservations
  -t TYPE, --type TYPE  Filter result by instance type.
```
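The normalization factor mentioned above is AWS's conversion of instance sizes into comparable capacity units (a `small` counts as 1 unit, a `large` as 4, an `xlarge` as 8, and so on). A minimal sketch of that arithmetic, with a hypothetical helper name, not the script's own code:

```python
# AWS instance-size normalization factors (small = 1 unit).
NORMALIZATION = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32,
}

def normalized_units(instance_type, count):
    """Convert a reservation, e.g. 4 x m5.large, into normalized units."""
    family, size = instance_type.split(".")
    return NORMALIZATION[size] * count

# 4 reserved m5.large (factor 4 each) cover the same capacity
# as 2 m5.xlarge (factor 8 each): 16 units either way.
assert normalized_units("m5.large", 4) == normalized_units("m5.xlarge", 2)
```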
To use the Google Calendar feature you just have to [enable the Calendar API in your Google account](https://console.developers.google.com) and create a calendar called aws in [Google Calendar](http://calendar.google.com/). Then create the *OAuth client ID* credentials, download the credentials file, and save it as `client_secret.json` in the aws-scripts repository folder. The first time you run the script with the `--create-google-calendar-events` option, a web browser will open asking you to log in with the Google account you want to use.
Then, whenever you buy new reservations on Amazon Web Services, you can add them to your calendar just by running the script again.
ec2-ebs.py
----------
Lists the EC2 EBS volumes, including the Name tag, size, device, ID, attached instance ID, attached instance Name tag, type, IOPS, zone and status.
You can filter the results by type, status and tag name.
The `-h` option shows you how to use the available options.
```
usage: ec2-ebs.py [-h] [-n NAME] [-t {gp2,io1,st1,sc1,standard}]
                  [-s {creating,available,in-use,deleting,deleted,error}]

List all the Elastic Block Storage volumes

optional arguments:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  Filter result by name.
  -t {gp2,io1,st1,sc1,standard}, --type {gp2,io1,st1,sc1,standard}
                        Filter result by type.
  -s {creating,available,in-use,deleting,deleted,error}, --status {creating,available,in-use,deleting,deleted,error}
                        Filter result by status.
```
ec2-elb.py
----------
Lists all your Elastic Load Balancers and their attached instances.
```
usage: ec2-elb.py [-h]

For every Elastic Load Balancer list the attached instances

optional arguments:
  -h, --help  show this help message and exit
```
ec2-snap-mgmt.py
----------------
With this script you can see the relationships between your snapshots and your EBS volumes and AMIs. This helps you choose the snapshots you don't need to keep in the AWS S3 service.
By default the script shows all the volumes and AMIs related to each snapshot.
You can also show all the snapshots related to each volume. This view is especially useful when you only want to keep a certain number of snapshots per volume.
Finally, you can show all the snapshots related to each AMI.
The `-h` option shows you how to use the available options.
```
usage: ec2-snap-mgmt.py [-h] [-v {orphan,volumes,images}] owner_id

positional arguments:
  owner_id              12-digit AWS Account Number

optional arguments:
  -h, --help            show this help message and exit
  -v {orphan,volumes,images}, --view {orphan,volumes,images}
                        Available views: orphan, volumes and images. Orphan
                        is the default one.
```
The script doesn't actually delete anything; it just shows you the relationships in a tree view.
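The data those views are built from looks roughly like the sketch below, which uses made-up IDs in the shape returned by boto3's `describe_snapshots()` and `describe_images()`; it is not the script's own code:

```python
# Illustrative data: snapshots belong to volumes, AMIs reference snapshots.
snapshots = [
    {"SnapshotId": "snap-1", "VolumeId": "vol-a"},
    {"SnapshotId": "snap-2", "VolumeId": "vol-a"},
    {"SnapshotId": "snap-3", "VolumeId": "vol-b"},
]
images = [
    {"ImageId": "ami-1",
     "BlockDeviceMappings": [{"Ebs": {"SnapshotId": "snap-3"}}]},
]

# Snapshots referenced by an AMI must be kept; the rest are candidates
# for deletion (the "orphan" view).
referenced = {m["Ebs"]["SnapshotId"]
              for ami in images
              for m in ami["BlockDeviceMappings"] if "Ebs" in m}
orphans = [s["SnapshotId"] for s in snapshots
           if s["SnapshotId"] not in referenced]
# orphans -> ["snap-1", "snap-2"]
```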
mongodb-backup.py
-----------------
This is a tool to make MongoDB backups on Amazon.
Two methods are supported: dump and snapshot.
- The dump method uses `mongodump` to perform a binary backup of your local or remote MongoDB instance. The dumped files are compressed into a tarball and uploaded to an Amazon S3 bucket.
- For the snapshot method, you provide the data and/or journal volumes, and the script will automatically lock the database and suspend all writes during the backup process to ensure a consistent backup, if required.
You can specify the number of copies to retain in the S3 bucket or in the EC2 snapshot area; the oldest ones will be automatically removed.
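The retention logic amounts to sorting the existing backups by date, keeping the newest N, and deleting the rest. A minimal sketch with illustrative names (S3's `list_objects_v2`, for example, returns a `LastModified` timestamp per key that could feed this):

```python
from datetime import datetime

def backups_to_delete(keys_with_dates, number):
    """keys_with_dates: list of (key, datetime) pairs.
    Keep the `number` newest backups; return the keys to delete."""
    ordered = sorted(keys_with_dates, key=lambda kv: kv[1], reverse=True)
    return [key for key, _ in ordered[number:]]

backups = [
    ("dump-2020-01-01.tar.gz", datetime(2020, 1, 1)),
    ("dump-2020-01-03.tar.gz", datetime(2020, 1, 3)),
    ("dump-2020-01-02.tar.gz", datetime(2020, 1, 2)),
]
# Retaining 2 copies marks only the oldest dump for deletion.
assert backups_to_delete(backups, 2) == ["dump-2020-01-01.tar.gz"]
```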
```
usage: mongodb-backup.py [-h] [-m {dump,snapshot}] [-u USER] [-p PASSWORD]
                         [-H HOST] [-d DATABASE] [-e EXCLUDE_COLLECTION]
                         [-o OUT] [-n NUMBER] [-b BUCKET] [-P PREFIX]
                         [-v VOLUME_ID [VOLUME_ID ...]] [--no_journal]
                         [-r REGION]

A tool to make mongodb backups on Amazon

optional arguments:
  -h, --help            show this help message and exit
  -m {dump,snapshot}, --method {dump,snapshot}
                        Backup method. Dump if none is provided
  -u USER, --user USER  Mongodb user (optional)
  -p PASSWORD, --