Create the NSOT Application, Bastion Host, and ELB modules

In this final article of the series, we will break down the Terraform code used to create the NSOT application instances, the bastion host instances for management, and the frontend Elastic Load Balancer (ELB) fronting the NSOT application servers.

The NSOT module uses a service we have not yet discussed: the Auto Scaling group (ASG). An Auto Scaling group contains a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of instance scaling and management. For example, if a single application operates across multiple instances, you might want to increase the number of instances in that group to improve the performance of the application, or decrease the number of instances to reduce costs when demand is low.

You can use the Auto Scaling group to scale the number of instances automatically based on criteria that you specify, or to maintain a fixed number of instances even if an instance becomes unhealthy. This automatic scaling and maintenance of instance counts is the core functionality of the Amazon EC2 Auto Scaling service. In our case, we will pass parameters (using the code below) to the NSOT module to keep a minimum of two NSOT application servers running at all times across two Availability Zones. We will also pass a config template to the ASG to apply the necessary base configuration to the NSOT instances: installing Docker on the instances, pulling down the NSOT base Docker container, and passing custom parameters to the container.

First, we will use the Terraform `template_file` data resource to pass variables to the config init script, which is then rendered in the launch configuration resource to apply the base config to the NSOT instances. Let's take a look at the template resource first, and then we will dive into the init.tpl script file:

/modules/nsot-frontend/main.tf
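The original listing is not reproduced here, so the following is a minimal sketch of what the `template_file` data resource could look like. The variable names (`db_host`, `db_password`, `nsot_port`) are illustrative assumptions, not the exact names from the repo:

```hcl
# Sketch: render init.tpl, injecting variables the bootstrap script needs.
# The rendered result is consumed by the launch configuration's user_data.
data "template_file" "nsot_init" {
  template = file("${path.module}/init.tpl")

  vars = {
    db_host     = var.db_host     # RDS endpoint from the previous article
    db_password = var.db_password
    nsot_port   = var.nsot_port
  }
}
```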

Next, let’s break down what’s going on in the launch configuration resource and autoscaling group resource code:
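As a hedged sketch of those two resources (resource and variable names here are assumptions for illustration): the launch configuration defines *what* each instance looks like, including the rendered init script as `user_data`, while the Auto Scaling group defines *how many* instances run and where. Setting `min_size = 2` with subnets in two Availability Zones gives us the redundancy described above:

```hcl
resource "aws_launch_configuration" "nsot" {
  name_prefix     = "nsot-"
  image_id        = var.ami_id
  instance_type   = var.instance_type
  security_groups = [var.nsot_sg_id]
  key_name        = var.key_name
  # Base config rendered from init.tpl by the template_file data source
  user_data       = data.template_file.nsot_init.rendered

  lifecycle {
    # Create the replacement before destroying the old config on change
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "nsot" {
  name                 = "nsot-asg"
  launch_configuration = aws_launch_configuration.nsot.name
  min_size             = 2
  max_size             = 2
  desired_capacity     = 2
  # Two private subnets, one per Availability Zone
  vpc_zone_identifier  = var.private_subnet_ids
  health_check_type    = "EC2"

  tag {
    key                 = "Name"
    value               = "nsot-app"
    propagate_at_launch = true
  }
}
```

Note the `create_before_destroy` lifecycle block: launch configurations are immutable, so Terraform must build the new one before tearing down the old one that the ASG still references.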

File: init.tpl
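A minimal sketch of what this bootstrap template could contain, assuming an Amazon Linux AMI; the Docker image name and the `${...}` template variables are illustrative placeholders matching the `template_file` sketch earlier, not the exact contents of the repo's script:

```bash
#!/bin/bash
# Hypothetical user-data bootstrap for the NSOT instances.
# ${db_host}, ${db_password}, and ${nsot_port} are interpolated by the
# template_file data source before this script reaches the instance.

# Install and start Docker
yum update -y
yum install -y docker
service docker start

# Pull the NSOT base container (image name assumed for illustration)
docker pull nsot/nsot

# Run NSOT, pointing it at the MySQL RDS instance
docker run -d --name nsot \
  -p ${nsot_port}:8990 \
  -e DB_HOST="${db_host}" \
  -e DB_PASSWORD="${db_password}" \
  nsot/nsot
```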

That’s it! You now have a redundant NSOT app front end running against the MySQL RDS instance from the previous article. Let’s quickly review the output variables, which will be used in the ELB module.

/modules/nsot-frontend/outputs.tf
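A sketch of the outputs this module would need to expose so the ELB module can attach to the Auto Scaling group (output names are assumptions):

```hcl
# Name of the ASG, used by the ELB module to attach the instances
output "asg_name" {
  value = aws_autoscaling_group.nsot.name
}

# Exposed in case other modules need to reference the launch configuration
output "launch_config_name" {
  value = aws_launch_configuration.nsot.name
}
```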

Now that we have a working pair of NSOT application instances, let’s place them behind an AWS Elastic Load Balancer, limiting the exposure of our app front end and allowing us to balance user traffic across the instances.

/main.tf
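The root `main.tf` wires the modules together. A hedged sketch of the two module calls relevant here; the module names, variable names, and upstream outputs (`module.vpc`, `module.rds`) are illustrative assumptions about how the earlier articles structured the project:

```hcl
module "nsot-frontend" {
  source             = "./modules/nsot-frontend"
  ami_id             = var.ami_id
  instance_type      = var.instance_type
  key_name           = var.key_name
  private_subnet_ids = module.vpc.private_subnet_ids
  nsot_sg_id         = module.vpc.nsot_sg_id
  db_host            = module.rds.endpoint
  db_password        = var.db_password
}

module "elb" {
  source         = "./modules/elb"
  public_subnets = module.vpc.public_subnet_ids
  elb_sg_id      = module.vpc.elb_sg_id
  # Output from the nsot-frontend module, used to attach the ASG instances
  asg_name       = module.nsot-frontend.asg_name
}
```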

/modules/elb/main.tf
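A minimal sketch of the ELB module, assuming a classic `aws_elb` fronting the NSOT port from the init script; ports, health-check settings, and variable names are assumptions for illustration:

```hcl
resource "aws_elb" "nsot" {
  name            = "nsot-elb"
  subnets         = var.public_subnets
  security_groups = [var.elb_sg_id]

  # Forward user traffic on port 80 to the NSOT container port
  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = 8990
    instance_protocol = "http"
  }

  # Remove unhealthy instances from rotation
  health_check {
    target              = "TCP:8990"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}

# Attach the NSOT Auto Scaling group so new instances register automatically
resource "aws_autoscaling_attachment" "nsot" {
  autoscaling_group_name = var.asg_name
  elb                    = aws_elb.nsot.name
}
```

Using `aws_autoscaling_attachment` (rather than listing instance IDs) means instances launched or replaced by the ASG are registered with the load balancer automatically.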

Great! Now we have everything we need to deploy a fully functional IPAM solution. The final module creates a redundant bastion/management host, again using an Auto Scaling group along with a config template that installs a few administration tools for managing your MySQL RDS cluster, plus the NSOT CLI tool for managing your application environment. Using a centralized management solution for any environment is recommended from a security perspective: it minimizes the attack surface of your application by limiting admin access to a dedicated management environment you control.

File: /modules/bastion/main.tf
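The bastion module follows the same pattern as the NSOT front end: a `template_file` data source rendering a bootstrap script into a launch configuration, wrapped in an Auto Scaling group for self-healing. The sketch below assumes a single instance spread across the public subnets so the ASG replaces it if it fails; names and sizes are illustrative:

```hcl
data "template_file" "bastion_init" {
  # init.tpl here would install the MySQL client and the NSOT CLI tool
  template = file("${path.module}/init.tpl")

  vars = {
    db_host = var.db_host
  }
}

resource "aws_launch_configuration" "bastion" {
  name_prefix     = "bastion-"
  image_id        = var.ami_id
  instance_type   = "t2.micro"
  security_groups = [var.bastion_sg_id]
  key_name        = var.key_name
  user_data       = data.template_file.bastion_init.rendered

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "bastion" {
  name                 = "bastion-asg"
  launch_configuration = aws_launch_configuration.bastion.name
  min_size             = 1
  max_size             = 1
  # Public subnets in both Availability Zones; the ASG relaunches the
  # bastion in a healthy AZ if the instance or zone fails
  vpc_zone_identifier  = var.public_subnet_ids

  tag {
    key                 = "Name"
    value               = "bastion"
    propagate_at_launch = true
  }
}
```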

This completes the series of articles walking through the Terraform code to build a complete open-source IPAM solution in AWS. The video link below will guide you through cloning the Prometheus LLC repo as well as the Terraform commands to run to deploy the entire project. Happy Hacking!