Installing killbill.io on AWS Amazon Linux based EC2

Lately I was looking for an easy-to-use subscription management solution that could integrate with our Sugar CRM CE installation. I found an open-source initiative named Kill Bill quite interesting. After trying the demo version of killbill.io, I decided to install it in our lab hosted on AWS. I thought I would document the steps in a blog.

But before I write the steps, I must confess that I am not a Ruby expert. I resolved some of the issues that I faced by searching Google and taking some pointers from one of the Kill Bill key contributors. If there are better ways to do this, please drop me a message and I'll update the blog.

Here are the steps I followed to install killbill.io on an AWS Amazon Linux EC2 instance:

Launch EC2 Instance

I am not going to detail the steps involved in launching an AWS EC2 instance; that is not the focus of this blog. However, you may follow the link ‘Launch an AWS EC2 Instance‘ if you need further instructions in this regard. I would suggest a t2.small or m3.medium instance to start with.

The Kill Bill application tries to open a network connection on the local machine, so verify that the name returned by ‘hostname’ resolves:

ping `hostname`

If ping doesn’t resolve the hostname, you need to correct it by editing /etc/sysconfig/network and adding an entry for the hostname in /etc/hosts.
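A minimal sketch of that fix, assuming a hostname of killbill and a private IP of 10.0.0.10 (both are placeholders for your own values):

# make the hostname resolve locally
echo "10.0.0.10 killbill" | sudo tee -a /etc/hosts
# persist the hostname across reboots
sudo sed -i 's/^HOSTNAME=.*/HOSTNAME=killbill/' /etc/sysconfig/network
# re-test name resolution
ping -c 1 `hostname`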

Install Java

Java 7 is pre-installed on the current version of the AWS Amazon Linux AMI, and I kept that version of Java.
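If you want to double-check the pre-installed version before proceeding, run:

java -version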

Install Ruby

The AWS EC2 Amazon Linux AMI does contain Ruby 2.0; however, the recommended version is Ruby 2.1+. I used rvm to update the Ruby version.

Here are the commands that I ran to install rvm, Ruby, Rails and io-console:

#install rvm
gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
curl -sSL https://get.rvm.io | bash -s stable
# load rvm into the current shell (or log out and back in)
source ~/.rvm/scripts/rvm

#install ruby
rvm install 2.2.2
rvm use 2.2.2 --default

#install rails
gem install rails

#install io-console
gem install io-console

Some of the commands may take a while, as Ruby is compiled on the fly. You can use the following commands to verify the ruby and rails installations:

[ec2-user@killbill etc]$ rvm -v
rvm 1.27.0 (latest) by Wayne E. Seguin <wayneeseguin@gmail.com>, Michal Papis <mpapis@gmail.com> [https://rvm.io/]
[ec2-user@killbill etc]$ ruby -v
ruby 2.2.2p95 (2015-04-13 revision 50295) [x86_64-linux]
[ec2-user@killbill etc]$ rails -v
Rails 5.0.0.1

Install Killbill

Killbill installation is straightforward provided the prerequisites are in place. You first install kpm (the Kill Bill package manager) and then use it to install Kill Bill.

Run these commands to install Kill Bill in the $HOME/killbill directory:

gem install kpm
mkdir killbill 
cd killbill 
kpm install

Configure MySQL DB

By default Kill Bill uses the H2 database. However, you have the option to use any other database that supports JDBC. I created an AWS MySQL RDS instance and configured it in Kill Bill.

You will need to create the killbill and kaui schemas in the RDS instance that you created earlier. Download the Kill Bill DDL and KAUI DDL, and run both DDL scripts against the RDS instance, for example as shown below.
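A rough sketch of that step using the mysql command-line client; the RDS endpoint, admin user and DDL file names are placeholders for your own values and the files you downloaded:

# create the two schemas on the RDS instance
mysql -h <rds-endpoint> -u <admin-user> -p -e "CREATE DATABASE killbill; CREATE DATABASE kaui;"
# load the Kill Bill DDL into the killbill schema
mysql -h <rds-endpoint> -u <admin-user> -p killbill < killbill-ddl.sql
# load the KAUI DDL into the kaui schema
mysql -h <rds-endpoint> -u <admin-user> -p kaui < kaui-ddl.sql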

Add the following entries at the bottom of $HOME/killbill/conf/catalina.properties:

# Kill Bill properties
org.killbill.dao.url=jdbc:mysql://XXX:3306/killbill
org.killbill.dao.user=XXX
org.killbill.dao.password=XXX
org.killbill.billing.osgi.dao.url=jdbc:mysql://XXX:3306/killbill
org.killbill.billing.osgi.dao.user=XXX
org.killbill.billing.osgi.dao.password=XXX

# Kaui properties
kaui.db.adapter=jdbcmysql
kaui.db.url=jdbc:mysql://XXX:3306/kaui
kaui.db.username=XXX
kaui.db.password=XXX

Run Killbill

The Kill Bill installation provides a startup.sh script in the $HOME/killbill/bin directory; run it to start the Kill Bill application. I looked at $HOME/logs/catalina.out to troubleshoot the startup.
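For reference, the start and troubleshooting steps boil down to two commands:

# start the Kill Bill application
$HOME/killbill/bin/startup.sh
# watch the Tomcat log for startup errors
tail -f $HOME/logs/catalina.out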

Once Kill Bill is up and running, you can use the Kill Bill Admin UI (KAUI) to verify the installation. I created a tenant, accounts, charges, etc. to verify it. Here is the URL you can use to open the KAUI interface:

http://x.x.x.x:8080/kaui
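If you prefer a quick check from the shell first, Kill Bill also exposes a healthcheck endpoint (path as documented for recent versions; adjust if yours differs):

curl -s http://127.0.0.1:8080/1.0/healthcheck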

AWS CLOUDFORMATION All Together – Part 2 (VPC, NAT Gateway, Public and Private Subnets)

So here I am. I hope my last blog, AWS CLOUDFORMATION All Together – Part 1, gave you a basic idea of how to work with CloudFormation. In this one, let's create a template for creating:

  1. VPC
  2. NAT Gateway
  3. 2 Public Subnets
  4. 2 Private Subnets
  5. Other dependent resources for the VPC

NAT Gateway is a new AWS managed service that replaces NAT instances, which we previously had to configure ourselves.

  1. Parameters

The Parameters to be passed are
Tag
VPC CIDR Range
Public Subnet-1 CIDR Range
Public Subnet-1 Availability Zone
Public Subnet-2 CIDR Range
Public Subnet-2 Availability Zone
Private Subnet-1 CIDR Range
Private Subnet-1 Availability Zone
Private Subnet-2 CIDR Range
Private Subnet-2 Availability Zone

Tip: Using "AllowedValues" lets you pick the value from a drop-down list, as in the screenshot below.
"TagValue2" : {
"Description" : "The Name of Environment",
"Type" : "String",
"AllowedValues" : ["Development","Staging","Production"]
}

[Screenshot: Environment drop-down list]

  2. Resources

In the Resources section we will have the following components:
VPC
Subnets
Internet Gateway
NAT Gateway
EIP for NAT Gateway
Route Table
Route Table Association
Network Acl
Subnet Network Acl Association
Private Subnets

A few important points worth considering:
Please note that you will need one Network ACL Association resource per subnet, e.g.
subnet 1 to association 1
subnet 2 to association 2

 
"PrivateSubnetNetworkAclAssociation1":{  
   "Type":"AWS::EC2::SubnetNetworkAclAssociation",
   "Properties":{  
      "SubnetId":{  
         "Ref":"PrivateSubnet1"
      },
      "NetworkAclId":{  
         "Ref":"PrivateNetworkAcl"
      }
   }
},
"PrivateSubnetNetworkAclAssociation2":{  
   "Type":"AWS::EC2::SubnetNetworkAclAssociation",
   "Properties":{  
      "SubnetId":{  
         "Ref":"PrivateSubnet2"
      },
      "NetworkAclId":{  
         "Ref":"PrivateNetworkAcl"
      }
   }
},
Note that we cannot add both subnets in a single association block like the following (this is invalid):
"PrivateSubnetNetworkAclAssociation1":{  
   "Type":"AWS::EC2::SubnetNetworkAclAssociation",
   "Properties":{  
      "SubnetId":[  
         {  
            "Ref":"PrivateSubnet1"
         },
         {  
            "Ref":"PrivateSubnet2"
         }
      ],
      "NetworkAclId":{  
         "Ref":"PrivateNetworkAcl"
      }
   }
},
Finally, here is my template, which will create the VPC, 2 public subnets, 2 private subnets and the NAT Gateway.

{  
   "AWSTemplateFormatVersion":"2010-09-09",
   "Description":"This Template will create VPC, Subnet and resources needed for VPC  ",
   "Parameters":{  
      "TagValue1":{  
         "Description":"The Project Name",
         "Type":"String"
      },
      "TagValue2":{  
         "Description":"The Name of Environment",
         "Type":"String",
         "AllowedValues":[  
            "Development",
            "Staging",
            "Production"
         ]
      },
      "CIDR":{  
         "Description":"The IP address range that you'll use for your VPC",
         "Type":"String"
      },
      "PublicCidrBlock1":{  
         "Description":"The IP address range for Public Subnet 1",
         "Type":"String"
      },
      "PublicSubnet1AZ":{  
         "Description":"The AZ for Public Subnet 1",
         "Type":"AWS::EC2::AvailabilityZone::Name"
      },
      "PublicCidrBlock2":{  
         "Description":"The IP address range for Public Subnet 2",
         "Type":"String"
      },
      "PublicSubnet2AZ":{  
         "Description":"The AZ for Public Subnet 2",
         "Type":"AWS::EC2::AvailabilityZone::Name"
      },
      "PrivateCidrBlock1":{  
         "Description":"The IP address range for Private Subnet 1",
         "Type":"String"
      },
      "PrivateSubnet1AZ":{  
         "Description":"The AZ for Private Subnet 1",
         "Type":"AWS::EC2::AvailabilityZone::Name"
      },
      "PrivateCidrBlock2":{  
         "Description":"The IP address range for Private Subnet 2",
         "Type":"String"
      },
      "PrivateSubnet2AZ":{  
         "Description":"The AZ Private Subnet 2",
         "Type":"AWS::EC2::AvailabilityZone::Name"
      }
   },
   "Resources":{  
      "VPC":{  
         "Type":"AWS::EC2::VPC",
         "Properties":{  
            "CidrBlock":{  
               "Ref":"CIDR"
            },
            "EnableDnsSupport":"true",
            "EnableDnsHostnames":"true",
            "InstanceTenancy":"default",
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               }
            ]
         }
      },
      "PublicSubnet1":{  
         "Type":"AWS::EC2::Subnet",
         "Properties":{  
            "VpcId":{  
               "Ref":"VPC"
            },
            "CidrBlock":{  
               "Ref":"PublicCidrBlock1"
            },
            "MapPublicIpOnLaunch":"true",
            "AvailabilityZone":{  
               "Ref":"PublicSubnet1AZ"
            },
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               }
            ]
         }
      },
      "PublicSubnet2":{  
         "Type":"AWS::EC2::Subnet",
         "Properties":{  
            "VpcId":{  
               "Ref":"VPC"
            },
            "CidrBlock":{  
               "Ref":"PublicCidrBlock2"
            },
            "MapPublicIpOnLaunch":"true",
            "AvailabilityZone":{  
               "Ref":"PublicSubnet2AZ"
            },
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               }
            ]
         }
      },
      "InternetGateway":{  
         "Type":"AWS::EC2::InternetGateway",
         "Properties":{  
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               }
            ]
         }
      },
      "AttachGateway":{  
         "Type":"AWS::EC2::VPCGatewayAttachment",
         "Properties":{  
            "VpcId":{  
               "Ref":"VPC"
            },
            "InternetGatewayId":{  
               "Ref":"InternetGateway"
            }
         }
      },
      "NAT":{  
         "DependsOn":"AttachGateway",
         "Type":"AWS::EC2::NatGateway",
         "Properties":{  
            "AllocationId":{  
               "Fn::GetAtt":[  
                  "EIP",
                  "AllocationId"
               ]
            },
            "SubnetId":{  
               "Ref":"PublicSubnet1"
            }
         }
      },
      "EIP":{  
         "Type":"AWS::EC2::EIP",
         "Properties":{  
            "Domain":"vpc"
         }
      },
      "PublicRouteTable":{  
         "Type":"AWS::EC2::RouteTable",
         "Properties":{  
            "VpcId":{  
               "Ref":"VPC"
            },
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               },
               {  
                  "Key":"Network",
                  "Value":"Public"
               }
            ]
         }
      },
      "PublicRoute":{  
         "Type":"AWS::EC2::Route",
         "DependsOn":"AttachGateway",
         "Properties":{  
            "RouteTableId":{  
               "Ref":"PublicRouteTable"
            },
            "DestinationCidrBlock":"0.0.0.0/0",
            "GatewayId":{  
               "Ref":"InternetGateway"
            }
         }
      },
      "PublicSubnetRouteTableAssociation1":{  
         "Type":"AWS::EC2::SubnetRouteTableAssociation",
         "Properties":{  
            "SubnetId":{  
               "Ref":"PublicSubnet1"
            },
            "RouteTableId":{  
               "Ref":"PublicRouteTable"
            }
         }
      },
      "PublicSubnetRouteTableAssociation2":{  
         "Type":"AWS::EC2::SubnetRouteTableAssociation",
         "Properties":{  
            "SubnetId":{  
               "Ref":"PublicSubnet2"
            },
            "RouteTableId":{  
               "Ref":"PublicRouteTable"
            }
         }
      },
      "PublicNetworkAcl":{  
         "Type":"AWS::EC2::NetworkAcl",
         "Properties":{  
            "VpcId":{  
               "Ref":"VPC"
            },
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               },
               {  
                  "Key":"Network",
                  "Value":"Public"
               }
            ]
         }
      },
      "PublicSubnetNetworkAclAssociation1":{  
         "Type":"AWS::EC2::SubnetNetworkAclAssociation",
         "Properties":{  
            "SubnetId":{  
               "Ref":"PublicSubnet1"
            },
            "NetworkAclId":{  
               "Ref":"PublicNetworkAcl"
            }
         }
      },
      "PublicSubnetNetworkAclAssociation2":{  
         "Type":"AWS::EC2::SubnetNetworkAclAssociation",
         "Properties":{  
            "SubnetId":{  
               "Ref":"PublicSubnet2"
            },
            "NetworkAclId":{  
               "Ref":"PublicNetworkAcl"
            }
         }
      },
      "PrivateSubnet1":{  
         "Type":"AWS::EC2::Subnet",
         "Properties":{  
            "VpcId":{  
               "Ref":"VPC"
            },
            "CidrBlock":{  
               "Ref":"PrivateCidrBlock1"
            },
            "AvailabilityZone":{  
               "Ref":"PrivateSubnet1AZ"
            },
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               }
            ]
         }
      },
      "PrivateSubnet2":{  
         "Type":"AWS::EC2::Subnet",
         "Properties":{  
            "VpcId":{  
               "Ref":"VPC"
            },
            "CidrBlock":{  
               "Ref":"PrivateCidrBlock2"
            },
            "AvailabilityZone":{  
               "Ref":"PrivateSubnet2AZ"
            },
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               }
            ]
         }
      },
      "PrivateRouteTable":{  
         "Type":"AWS::EC2::RouteTable",
         "Properties":{  
            "VpcId":{  
               "Ref":"VPC"
            },
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               },
               {  
                  "Key":"Network",
                  "Value":"Private"
               }
            ]
         }
      },
      "PrivateSubnetRouteTableAssociation1":{  
         "Type":"AWS::EC2::SubnetRouteTableAssociation",
         "Properties":{  
            "SubnetId":{  
               "Ref":"PrivateSubnet1"
            },
            "RouteTableId":{  
               "Ref":"PrivateRouteTable"
            }
         }
      },
      "PrivateSubnetRouteTableAssociation2":{  
         "Type":"AWS::EC2::SubnetRouteTableAssociation",
         "Properties":{  
            "SubnetId":{  
               "Ref":"PrivateSubnet2"
            },
            "RouteTableId":{  
               "Ref":"PrivateRouteTable"
            }
         }
      },
      "PrivateNATRouteTableAssociation":{  
         "Type":"AWS::EC2::Route",
         "Properties":{  
            "RouteTableId":{  
               "Ref":"PrivateRouteTable"
            },
            "DestinationCidrBlock":"0.0.0.0/0",
            "NatGatewayId":{  
               "Ref":"NAT"
            }
         }
      },
      "PrivateNetworkAcl":{  
         "Type":"AWS::EC2::NetworkAcl",
         "Properties":{  
            "VpcId":{  
               "Ref":"VPC"
            },
            "Tags":[  
               {  
                  "Key":"Project",
                  "Value":{  
                     "Ref":"TagValue1"
                  }
               },
               {  
                  "Key":"Environment",
                  "Value":{  
                     "Ref":"TagValue2"
                  }
               },
               {  
                  "Key":"Network",
                  "Value":"Private"
               }
            ]
         }
      },
      "PrivateSubnetNetworkAclAssociation1":{  
         "Type":"AWS::EC2::SubnetNetworkAclAssociation",
         "Properties":{  
            "SubnetId":{  
               "Ref":"PrivateSubnet1"
            },
            "NetworkAclId":{  
               "Ref":"PrivateNetworkAcl"
            }
         }
      },
      "PrivateSubnetNetworkAclAssociation2":{  
         "Type":"AWS::EC2::SubnetNetworkAclAssociation",
         "Properties":{  
            "SubnetId":{  
               "Ref":"PrivateSubnet2"
            },
            "NetworkAclId":{  
               "Ref":"PrivateNetworkAcl"
            }
         }
      },
      "NetworkAclEntry1":{  
         "Type":"AWS::EC2::NetworkAclEntry",
         "Properties":{  
            "CidrBlock":"0.0.0.0/0",
            "Egress":"true",
            "Protocol":"-1",
            "RuleAction":"allow",
            "RuleNumber":"100",
            "NetworkAclId":{  
               "Ref":"PublicNetworkAcl"
            }
         }
      },
      "NetworkAclEntry2":{  
         "Type":"AWS::EC2::NetworkAclEntry",
         "Properties":{  
            "CidrBlock":"0.0.0.0/0",
            "Protocol":"-1",
            "RuleAction":"allow",
            "RuleNumber":"100",
            "NetworkAclId":{  
               "Ref":"PublicNetworkAcl"
            }
         }
      },
      "NetworkAclEntry3":{  
         "Type":"AWS::EC2::NetworkAclEntry",
         "Properties":{  
            "CidrBlock":"0.0.0.0/0",
            "Egress":"true",
            "Protocol":"-1",
            "RuleAction":"allow",
            "RuleNumber":"100",
            "NetworkAclId":{  
               "Ref":"PrivateNetworkAcl"
            }
         }
      },
      "NetworkAclEntry4":{  
         "Type":"AWS::EC2::NetworkAclEntry",
         "Properties":{  
            "CidrBlock":"0.0.0.0/0",
            "Protocol":"-1",
            "RuleAction":"allow",
            "RuleNumber":"100",
            "NetworkAclId":{  
               "Ref":"PrivateNetworkAcl"
            }
         }
      }
   },
   "Outputs":{  
      "VPCId":{  
         "Description":"VPCId of the newly created VPC",
         "Value":          {  
            "Ref":"VPC"
         }
      },
      "PublicSubnet1Id":{  
         "Description":"SubnetId of the public subnet",
         "Value":          {  
            "Ref":"PublicSubnet1"
         }
      },
      "PublicSubnet2Id":{  
         "Description":"SubnetId of the public subnet",
         "Value":          {  
            "Ref":"PublicSubnet2"
         }
      },
      "PrivateSubnet1Id":{  
         "Description":"SubnetId of the public subnet",
         "Value":          {  
            "Ref":"PrivateSubnet1"
         }
      },
      "PrivateSubnet2Id":{  
         "Description":"SubnetId of the public subnet",
         "Value":          {  
            "Ref":"PrivateSubnet2"
         }
      }
   }
}

From CLI:

 
aws cloudformation create-stack --stack-name MY-FIRST-VPC --template-body file:///file-path.json --parameters ParameterKey=CIDR,ParameterValue=10.0.0.0/16 ParameterKey=PrivateCidrBlock1,ParameterValue=10.0.1.0/24 ParameterKey=PrivateCidrBlock2,ParameterValue=10.0.2.0/24 ParameterKey=PrivateSubnet1AZ,ParameterValue=us-east-1b ParameterKey=PrivateSubnet2AZ,ParameterValue=us-east-1c ParameterKey=PublicCidrBlock1,ParameterValue=10.0.3.0/24 ParameterKey=PublicCidrBlock2,ParameterValue=10.0.4.0/24 ParameterKey=PublicSubnet1AZ,ParameterValue=us-east-1b ParameterKey=PublicSubnet2AZ,ParameterValue=us-east-1c ParameterKey=TagValue1,ParameterValue=MyProject ParameterKey=TagValue2,ParameterValue=Development
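Before creating the stack it can help to validate the template, and afterwards to wait for completion; a small sketch using the same placeholders as the command above:

# check the template for JSON/CloudFormation syntax errors
aws cloudformation validate-template --template-body file:///file-path.json
# block until the stack reaches CREATE_COMPLETE (or fails)
aws cloudformation wait stack-create-complete --stack-name MY-FIRST-VPC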
 

AWS CLOUDFORMATION All Together – Part 1

AWS CloudFormation has so many references on the Internet, and many of them are very helpful, but what I found while going through them is that you have to go through hundreds of them to consolidate a single template. So I finally decided to put most of the things here.
 
A few important points worth considering while working on complex templates:
 
1) Do not try to create the complete stack from a single template, because you will end up with a template of 2k lines or more, and if the template has an error it becomes hard to sort out. It is better to divide the stack into logical components and then create a CloudFormation template for each, e.g.:
VPC, Subnets and NAT Gateway
DB Subnet Group, RDS, Replica and CloudWatch Alarms
 
2) Tags are very important, as they help in filtering resources for cost reports and other purposes.
 
3) Magic of Parameters: Always try to pass values in as Parameters. This helps make the template universal.
 
To start, let's create a simple template to launch an EC2 instance.
 
The template will be divided into 3 parts.

  1. Parameters
  2. Resources
  3. Outputs

  1. Parameters

These are the values which you need to give while launching an instance, like the VPC ID, AMI, role, subnet, key pair name, etc. Keeping these values in Parameters helps make the template universal.

Tip: Always create your AWS resources with proper tags. Leaving the tagging part for later is a time-consuming activity.
 
Types:
"VPC" : {
"Description" : "The VPC in which you want to Launch your EC2",
"Type" : "AWS::EC2::VPC::Id"
}

For the parameter Type, always specify an "Existing AWS value" type if one exists.

These are existing AWS values in the template user's account. You can specify the following AWS-specific types; a few examples are below.
 
AWS::EC2::AvailabilityZone::Name
An Availability Zone, such as us-west-2a.
AWS::EC2::Image::Id
An Amazon EC2 image ID, such as ami-ff527ecf. Note that the AWS CloudFormation console won’t show a drop-down list of values for this parameter type.
AWS::EC2::Instance::Id
An Amazon EC2 instance ID, such as i-1e731a32.
AWS::EC2::KeyPair::KeyName
An Amazon EC2 key pair name.
AWS::EC2::SecurityGroup::GroupName
An EC2-Classic or default VPC security group name, such as my-sg-abc.
AWS::EC2::SecurityGroup::Id
A security group ID, such as sg-a123fd85.
AWS::EC2::Subnet::Id
A subnet ID, such as subnet-123a351e.
AWS::EC2::Volume::Id
An Amazon EBS volume ID, such as vol-3cdd3f56.
AWS::EC2::VPC::Id
A VPC ID, such as vpc-a123baa3.
AWS::Route53::HostedZone::Id
An Amazon Route 53 hosted zone ID, such as Z23YXV4OVPL04A.

  2. Resources

These are the AWS components which we want to create from the template. Here I am creating an EC2 Security Group and an EC2 Instance.

  3. Outputs

This is the part where you can get the IDs or values of the AWS resources created by the template.

So what's the use of Outputs? Outputs do not seem important for a simple stack creation, but if you use a nested template, where the 2nd stack depends on values from the 1st, then they come into the picture. One way to consume them is from the CLI, as shown below.
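A small illustration, assuming the stack name used later in this post; a nested template would reference these values in the same spirit:

# list the Outputs (InstanceId, AZ, PublicIP, PrivateIP) of the created stack
aws cloudformation describe-stacks --stack-name MY-FIRST-STACK --query 'Stacks[0].Outputs'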
 
Finally, here is my template, which will create an EC2 Security Group and launch an EC2 Instance.

{
"AWSTemplateFormatVersion":"2010-09-09",
"Description":"This Template will create EC2 INSTANCE and Security group",
"Parameters":{
  "TagValue1":{
    "Description":"The Project Name",
    "Type":"String"
  },
  "TagValue2":{
    "Description":"The Environment name",
    "Type":"String",
    "AllowedValues":[
      "Development",
      "Staging",
      "Production"
    ]
  },
  "TagValue3":{
    "Description":"The EC2 Instance Name",
    "Type":"String"
  },
  "TagValue4":{
    "Description":"The Server Name",
    "Type":"String"
  },
  "VPC":{
    "Description":"The VPC in which you want to Launch your EC2",
    "Type":"AWS::EC2::VPC::Id"
  },
  "AMI":{
    "Description":"The AMI that you'll use for your EC2",
    "Type":"AWS::EC2::Image::Id"
  },
  "IAMROLE":{
    "Description":"The IAM role you'll use for your EC2",
    "Type":"String"
  },
  "Subnet":{
    "Description":"The Subnet that you'll use for your EC2",
    "Type":"AWS::EC2::Subnet::Id"
  },
  "KeyPairName":{
    "Description":"Name of an existing Amazon EC2 KeyPair for SSH access to the Web Server",
    "Type":"AWS::EC2::KeyPair::KeyName",
    "Default":"my-key"
  },
  "InstanceClass":{
    "Description":"EC2 instance type",
    "Type":"String",
    "Default":"t2.micro",
    "AllowedValues":[
      "t2.micro",
      "t2.medium",
      "t2.small",
      "t2.large",
      "m4.large",
      "m4.xlarge",
      "m4.2xlarge",
      "m4.4xlarge",
      "m4.10xlarge",
      "m3.medium",
      "m3.large",
      "m3.xlarge",
      "m3.2xlarge",
      "c4.large",
      "c4.xlarge",
      "c4.2xlarge",
      "c4.4xlarge",
      "c4.8xlarge",
      "c3.large",
      "c3.xlarge",
      "c3.2xlarge",
      "c3.4xlarge",
      "c3.8xlarge"
    ],
    "ConstraintDescription":"must be a valid EC2 instance type."
  }
},
"Resources":{
  "EC2SecurityGroup":{
    "Type":"AWS::EC2::SecurityGroup",
    "Properties":{
      "GroupDescription":"SecurityGroup",
      "VpcId":{
        "Ref":"VPC"
      },
      "SecurityGroupIngress":[
        {
          "IpProtocol":"tcp",
          "FromPort":"22",
          "ToPort":"22",
          "CidrIp":"0.0.0.0/0"
        },
        {
          "IpProtocol":"tcp",
          "FromPort":"80",
          "ToPort":"80",
          "CidrIp":"0.0.0.0/0"
        },
        {
          "IpProtocol":"tcp",
          "FromPort":"443",
          "ToPort":"443",
          "CidrIp":"0.0.0.0/0"
        }
      ]
    }
  },
  "Ec2Instance":{
    "Type":"AWS::EC2::Instance",
    "Properties":{
      "ImageId":{
        "Ref":"AMI"
      },
      "InstanceType":{
        "Ref":"InstanceClass"
      },
      "IamInstanceProfile":{
        "Ref":"IAMROLE"
      },
      "KeyName":{
        "Ref":"KeyPairName"
      },
      "SecurityGroupIds":[
        {
          "Ref":"EC2SecurityGroup"
        }
      ],
      "SubnetId":{
        "Ref":"Subnet"
      },
      "Tags":[
        {
          "Key":"Project",
          "Value":{
            "Ref":"TagValue1"
          }
        },
        {
          "Key":"Environment",
          "Value":{
            "Ref":"TagValue2"
          }
        },
        {
          "Key":"Name",
          "Value":{
            "Ref":"TagValue3"
          }
        },
        {
          "Key":"Server",
          "Value":{
            "Ref":"TagValue4"
          }
        }
      ],
      "Tenancy":"default"
    }
  }
},
"Outputs":{
  "InstanceId":{
    "Description":"InstanceId of the newly created EC2 instance",
    "Value":{
      "Ref":"Ec2Instance"
    }
  },
  "AZ":{
    "Description":"Availability Zone of the newly created EC2 instance",
    "Value":{
      "Fn::GetAtt":[
        "Ec2Instance",
        "AvailabilityZone"
      ]
    }
  },
  "PublicIP":{
    "Description":"Public IP address of the newly created EC2 instance",
    "Value":{
      "Fn::GetAtt":[
        "Ec2Instance",
        "PublicIp"
      ]
    }
  },
  "PrivateIP":{
    "Description":"Private IP address of the newly created EC2 instance",
    "Value":{
      "Fn::GetAtt":[
        "Ec2Instance",
        "PrivateIp"
      ]
    }
  }
}
}

From CLI:

 
aws cloudformation create-stack --stack-name MY-FIRST-STACK --template-body file:///file-path.json --parameters ParameterKey=AMI,ParameterValue=ami-xxx ParameterKey=IAMROLE,ParameterValue=my-role ParameterKey=InstanceClass,ParameterValue=t2.micro ParameterKey=KeyPairName,ParameterValue=my-key ParameterKey=Subnet,ParameterValue=subnet-xxxxx ParameterKey=VPC,ParameterValue=vpc-xxxx ParameterKey=TagValue1,ParameterValue=MyProject ParameterKey=TagValue2,ParameterValue=Development ParameterKey=TagValue3,ParameterValue=MyEc2 ParameterKey=TagValue4,ParameterValue=WebServer
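Since create-stack returns immediately, you can follow the creation progress from the CLI as well; a small sketch with the same stack name:

# watch what CloudFormation is doing while the stack comes up
aws cloudformation describe-stack-events --stack-name MY-FIRST-STACK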

Free SSL/TLS Certificates for applications hosted on AWS EC2

All About AWS Certificate Manager-ACM
Setup SSL/TLS certificates using AWS ACM

Here comes the good news: AWS has recently launched the Certificate Manager (ACM) service, designed to provision, protect and manage SSL/TLS certificates and their private keys, for free.

SSL and TLS are industry-standard protocols for encrypting network communications. They provide encryption for sensitive data in transit and use SSL/TLS certificates to establish the identity of a site and set up a secured connection between browsers and applications.

In general it is a time-consuming manual process to purchase, upload and renew those certificates. AWS Certificate Manager simplifies this complex process of generating, uploading and renewing certificates. This is achieved through a simple click process: there is no need to generate a certificate signing request (CSR), submit a CSR to a Certificate Authority, or upload and install the certificate once received. AWS Certificate Manager takes care of deploying certificates and handles all certificate renewals. Amazingly, this service is absolutely free; you just pay for the underlying infrastructure.

ACM Certificates are domain validated. That is, the subject field of an ACM Certificate identifies a domain name and nothing more. Email is sent to the registered owner for each domain name in the request.

The steps to set up a secured web application using ACM-generated certificates are given below:

1. Get a domain name for your web application.
2. Configure a load balancer for your application running on an AWS instance. At present, ACM certificates can only be used with Elastic Load Balancing or CloudFront.
3. Configure a Route 53 hosted zone for your domain. By default you get NS and SOA record sets; you need to add one more record of the canonical name (CNAME) type, with your load balancer's DNS name as the value.
4. If you have procured the domain name from somewhere outside AWS, you need to link the Route 53 name servers in your domain settings.
For example, if you have taken the domain from GoDaddy, navigate to domain settings -> name servers -> manage, select the setup type as custom and add the name server values (record set type NS). Note: it may take a few hours for the name server links to become effective after you add them in your domain settings.
5. Once the domain configuration is complete, you can log in to the ACM service and Request a Certificate. On request submission, a mail is sent to the registered owner of each domain name in the request. The domain owner or an authorized representative can approve the certificate request by following the instructions in the email. The status of the certificate changes to Issued after completing the instructions, which indicates that the certificate is ready to be linked with an ELB or CloudFront.
6. You can link the issued certificate under the Listeners tab of the load balancer. Select "an existing certificate from AWS Certificate Manager (ACM)"; all issued certificates will be available to be linked with the load balancer.
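For those who prefer the CLI, the request and ELB-attachment steps can also be scripted. A hedged sketch, where the domain name, load balancer name and certificate ARN are placeholders, and the region must be one where ACM is available:

# request a domain-validated certificate (this triggers the validation email)
aws acm request-certificate --domain-name www.example.com --region us-east-1
# once the status is Issued, find the certificate ARN
aws acm list-certificates --region us-east-1
# attach the issued certificate to the HTTPS listener of a classic ELB
aws elb set-load-balancer-listener-ssl-certificate --load-balancer-name my-load-balancer --load-balancer-port 443 --ssl-certificate-id <certificate-arn>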

Other important facts about the ACM service:

• It provides automatic renewal, which helps you avoid downtime due to misconfigured, revoked, or expired certificates.
• ACM Certificates are trusted by all major browsers including Google Chrome, Microsoft Internet Explorer, Mozilla Firefox, and Apple Safari. ACM Certificates are trusted by Java.
• ACM allows you to use an asterisk (*) in the domain name to create an ACM Certificate containing a wildcard name that can protect several sites in the same domain. For example, *.vikrant.com protects www.vikrant.com and images.vikrant.com.
• ACM supports the RSA-2048 encryption and SHA-256 hashing algorithms.

Though ACM has a few limitations, such as certificates currently being usable only with Elastic Load Balancing or CloudFront and not outside of AWS, it is still extremely handy for startups and developers to secure their web applications at no extra cost, without relying on system admins.

Note: Currently ACM service is only available in N Virginia Region.

How to work with AWS Cloudsearch PHP SDK

This blog also covers:
AWS Cloudsearch basic operations with PHP SDK
AWS Cloudsearch Drawbacks
AWS Cloudsearch Query
AWS Cloudsearch Suggester

 

CloudSearch is an AWS service which helps in searching large collections of data such as documents, web pages, posts, etc.
While integrating CloudSearch for one of my projects, I found there is very limited information available on how to use the CloudSearch PHP SDK. Most of my queries on the SDK were not answered convincingly even by AWS engineers. Almost all the examples given in the AWS docs and elsewhere use URL-based search, which may not be apt for every scenario/project.
Through this blog I want to give simple snippets which can be referred to by someone who is struggling with the PHP SDK. I will also try to list a few issues which a developer may face with AWS CloudSearch.

The primary tasks with any search engine are: how to upload documents, how to search, and how to get suggestions (auto-completions).
Below are simple snippets to work with CloudSearch:
Upload document:
<?php
require 'vendor/autoload.php'; // assumes the AWS SDK for PHP was installed via Composer

use Aws\CloudSearchDomain\CloudSearchDomainClient;

$CSclient = CloudSearchDomainClient::factory(array(
    'credentials' => array(
        'key' => 'YOUR KEY',
        'secret' => 'YOUR SECRET KEY',
    ),
    'endpoint' => 'YOUR END POINT',
));

$parameter = array(
    array(
        'type' => 'add',
        'id' => 'a0687322-5d77-2411-cfc2232d-54e618378373', // if you don't pass an id, CloudSearch auto-generates one
        'fields' => array(
            'field1' => 'ABC',
            'field2' => 'XYZ',
            'field3_integer' => '0', // integer field
            'field4_date' => '2015-06-12T00:00:00Z', // date format
        ),
    ), // you can add more documents and fields as per your domain
);
$json = json_encode($parameter);
print_r($json);

$response = $CSclient->uploadDocuments(array(
    'documents' => $json,
    'contentType' => 'application/json'
));
if ($response->get('status') === 'success') {
    echo $response->get('adds') . "\r\n";
}
?>
Search document: (I have added clauses for range and filter conditions, taking a property-search domain as the example)
$CSclient -> as above
$result = $CSclient->search(array(
    'query' => 'YOUR TEXT TO BE QUERIED',
    'return' => 'property_name,property_city,property_locationname,property_area,property_bedroom,property_bathroom,price,property_for,property_image,property_address', // fields to be returned
    'queryOptions' => '{"fields":["property_address","property_name","property_type","property_for","description","property_city","property_landmark","property_locationname","property_state","property_bedroom"]}', // fields to be considered for the search
    // combine the conditions into one structured filterQuery; duplicate array keys would overwrite each other
    'filterQuery' => '(and deleted:0 price:[30000000,50000000])' // where deleted = 0 and price is in the given range
));
$hitCount = $result->getPath('hits/found');
echo "Number of Hits: {$hitCount}\n";
print_r($result);
//var_dump($result, $result = null);
?>
Suggest document:
$CSclient -> as above
$result = $CSclient->suggest(array('query' => 'YOUR TEXT TO BE AUTO COMPLETED', 'suggester' => 'YOUR SUGGESTER NAME', 'size' => 150)); // size -> max number of suggestions
//$hitCount = $result->getPath('hits/found');
//echo "Number of Hits: {$hitCount}\n";
print_r($result);
?>
===============

With all the advantages and ease CloudSearch provides to developers, I found there are certain issues worth documenting which you may hit if it is used extensively.

1) ‘Small’ is the smallest instance type you can have for a CloudSearch domain. This increases the monthly domain cost.
2) Once a domain is launched, there is no option to stop or pause it to reduce billing.
3) Every insert is charged, which may result in high cost if you have 2-3 test/prod environments.
4) CloudSearch tokenises text on spaces and commas, which may give unexpected results if you are searching for text with spaces or special characters.
5) There is no easy export feature to take data out of CloudSearch.
6) The fields in CloudSearch are pre-configured (it has a schema), unlike Elasticsearch. Having a fixed schema may not be suitable if the number of fields is not fixed for a given domain.

Scheduling AWS Tasks using Data Pipeline Service

===============================================

Topic also covers…

Scheduling AWS Tasks using AWS Data Pipeline Service

Scheduling EC2 Instances without using third-party software

Using AWS Data Pipeline to Schedule tasks

How to schedule CLI commands and Custom Scripts in AWS

===============================================

With the increase in demand for AWS services, the necessity to manage AWS resources in a more efficient and economical way is also increasing.

This blog is intended to explain how the AWS Data Pipeline service can be used to automate tasks, which can result in cost optimization.

I am using an example where EC2 instances are stopped at a certain time and restarted again with the help of Data Pipeline. This can be implemented to reduce EC2 instance cost by bringing down instances during non-working hours. Similar automation can be done to achieve other optimizations. You can also execute your own custom script with the help of Data Pipeline; e.g. if you wish to take periodic backups (images), you can write a CLI command and schedule the job at your preferred time.
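For example, a hedged sketch of such a backup command (the instance ID and region are placeholders) that could be dropped into a Data Pipeline CLI activity:

# create an AMI of the instance as a dated backup, without rebooting it
aws ec2 create-image --instance-id i-xxxxxxxx --name "backup-$(date +%Y-%m-%d)" --no-reboot --region <<your region>>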

Note: There are many third-party tools available for performing similar tasks, but for those solutions you need to either share credentials or keep an instance running which takes care of the scheduling. In the case of Data Pipeline, an instance starts only when the task is scheduled and terminates right after execution.

Below are the steps for the Data Pipeline configuration:

  1. Login to the AWS console and go to the Data Pipeline service.
  2. Click on Create new Pipeline.
    • Under Source, select Command Line Interface (CLI).
    • Provide a valid CLI command under Parameters. e.g. to stop a few instances I used the command below:

aws ec2 stop-instances --instance-ids i-id1 i-id2 --region <<your region>>

Note: you can add more instance IDs and change the region as per your instance region.

    • Select the execution time and logs location.

Similar steps can be followed for scheduling an instance start; just the CLI command will be different, as shown below:

aws ec2 start-instances --instance-ids i-id1 i-id2 --region <<your region>>

Any complex command can be scheduled following the above steps.

Below is a command to stop all instances running under an account in a particular region. Instead of hardcoding instance IDs, I am querying all running instances and stopping them.

aws ec2 describe-instances --region <<your region>> --filter Name=instance-state-name,Values=running --query 'Reservations[].Instances[].InstanceId' --output text | xargs aws ec2 stop-instances --region <<your region>> --instance-ids

3) You can give the Role as DataPipelineDefaultResourceRole. One important point is to check the default policy attached to this role in IAM. There is a chance that the attached policy does not include the rights to perform the action you want via the CLI; e.g. by default, stopping an EC2 instance will not work. In that case you can create a custom policy with the required rights and attach it to the DataPipelineDefaultResourceRole role (a sample policy and an attach command are shown below).

Sample policy:

{
     "Version": "2012-10-17",
     "Statement": [
          {
               "Effect": "Allow",
               "Action": [
                    "s3:*",
                    "ec2:Describe*",
                    "ec2:Start*",
                    "ec2:RunInstances",
                    "ec2:Stop*",
                    "datapipeline:*",
                    "cloudwatch:*"
               ],
               "Resource": [
                    "*"
               ]
          }
     ]
}
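One way to attach that custom policy from the CLI, assuming you saved the JSON above as policy.json (the policy name is arbitrary):

# attach the sample policy inline to the Data Pipeline resource role
aws iam put-role-policy --role-name DataPipelineDefaultResourceRole --policy-name ec2-scheduler-access --policy-document file://policy.json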

Conclusions:

  • Data Pipeline can be used to automate CLI commands or custom scripts.
  • It is cost-effective, as scheduler start and termination are handled by AWS; an instance starts only at the scheduled time and terminates post execution.
  • It is more secure compared to other third-party scheduler tools, which force you to upload credentials.

 

Automating Transcoding using AWS services (Elastic Transcoder, Lambda, S3 notifications)

This blog also covers:

Sample Lambda function

Integrating S3 event notifications with a Lambda function

Creating an Elastic Transcoder job using a Lambda function

Elastic Transcoder is one of the very interesting AWS services and is extremely easy to use via the console. However, when it comes to automating transcoding for media files uploaded to S3, it turns out to be a slightly complex task. I tried a simple solution using a combination of more than one AWS service, and it worked perfectly for me.

In this blog I will explain the steps to automate transcoding. I have used a Lambda function to create a transcoder job; an S3 event notification generated on every object creation in a particular bucket invokes the Lambda function.

In case you are not familiar with AWS Lambda and S3 notifications, I would suggest going through the AWS documentation to get a basic understanding before proceeding.

Below are the steps to be followed:

1> Identify the source and destination directories/buckets. In my example, I have used two different buckets: one where input media is received and another where transcoded files are placed.

2> Configure a pipeline under the Elastic Transcoder service. The important fields are the input and output buckets. You can select the Elastic Transcoder default role under IAM Roles.
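If you want to script that pipeline instead of using the console, a hedged CLI sketch would look roughly like this (bucket names and the role ARN are placeholders):

aws elastictranscoder create-pipeline \
    --name my-transcode-pipeline \
    --input-bucket my-input-bucket \
    --output-bucket my-output-bucket \
    --role arn:aws:iam::123456789012:role/Elastic_Transcoder_Default_Role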

3> Create a Lambda function that creates an Elastic Transcoder job for the pipeline configured in step 2.

Below is the sample code which I used; the parameters to be changed as per your configuration are marked with placeholders such as 'Your-Region'.

 
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
    apiVersion: '2006-03-01'
});

var eltr = new AWS.ElasticTranscoder({
    apiVersion: '2012-09-25',
    region: 'Your-Region'
});

var pipelineId = 'Your-pipeline-ID';
var webPreset = 'Your-webPreset';

exports.handler = function(event, context) {
    var bucket = event.Records[0].s3.bucket.name;
    var key = event.Records[0].s3.object.key;
    s3.getObject({
            Bucket: bucket,
            Key: key
        },
        function(err, data) {
            console.log('err::: ' + err);
            console.log('data::: ' + data);
            if (err) {
                console.log('error getting object ' + key + ' from bucket ' + bucket +
                    '. Make sure they exist and your bucket is in the same region as this function.');
                context.done('ERROR', 'error getting file ' + err);
            } else {
                console.log('Reached B');
                /* Below section can be used if you want to put any check based on metadata

                if (data.Metadata['content-type'] === 'video/x-msvideo') {
                    console.log('Reached C');
                    console.log('Found new video: ' + key + ', sending to ET');
                    sendVideoToET(key);
                } else {
                    console.log('Reached D');
                    console.log('Upload ' + key + ' was not video');
                    console.log(JSON.stringify(data.Metadata));
                }
                */
                sendVideoToET(key);
            }
        }
    );
};

function sendVideoToET(key) {
    console.log('Sending ' + key + ' to ET');
    var params = {
        PipelineId: pipelineId,
        OutputKeyPrefix: 'Your-Prefix',
        Input: {
            Key: key,
            FrameRate: 'auto',
            Resolution: 'auto',
            AspectRatio: 'auto',
            Interlaced: 'auto',
            Container: 'auto'
        },

        Output: {
            Key: 'Your-output-file-key-name',
            ThumbnailPattern: 'Your-output-thumbnail-pattern',
            PresetId: webPreset,
            Rotate: 'auto'
        }
    };

    eltr.createJob(params, function(err, data) {
        if (err) {
            console.log('Failed to send new video ' + key + ' to ET');
            console.log(err);
            console.log(err.stack);
        } else {
            // job created successfully
            console.log(data);
        }
        //context.done(null, '');
    });
}

Under the Lambda Role, select the default lambda_exec_role.

Don't forget to grant Elastic Transcoder resource access to lambda_exec_role via the IAM module; otherwise, even after a correct configuration, the Lambda function will not be able to create jobs because of insufficient access.

4> Under the S3 source bucket (where input media files will be received), go to the Events section on the right-hand side. Create an event notification for the desired events. Under the Send To option, select Lambda function. On selecting Lambda function, you will be asked to provide two more inputs, i.e. the function ARN and the invocation role.

Provide the ARN of the lambda function configured in Step 3. For invocation role, lambda_invoke_role should be selected.

Save the event notification. And now you are ready to test automated transcoding of media files using AWS.
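A quick way to exercise the whole chain is to drop a sample file into the source bucket and watch for the new Elastic Transcoder job (bucket and file names are placeholders):

# upload a test video; the S3 event notification should invoke the Lambda function
aws s3 cp sample-video.mp4 s3://my-input-bucket/
# then check the Lambda function's CloudWatch Logs and the Elastic Transcoder Jobs console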

Enjoy transcoding..