By: Ashley Hutson
Twitter: @asheliahut
Slides: https://github.com/asheliahut/terraform-for-developers-talk
echo $PATH
/this/is/going/to/be/long/help/me:/next/path/is/long:/why/does/path/never/end
HCL = HashiCorp Configuration Language
HCL is human-friendly and fully interoperable with JSON: anything you write in HCL can also be expressed as JSON.
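As a quick illustration of the JSON relationship, a simple HCL variable block (a hypothetical `region` variable) could equivalently live in a `variables.tf.json` file:

```json
{
  "variable": {
    "region": {
      "type": "string",
      "default": "us-east-1"
    }
  }
}
```

This is the same declaration you would normally write in HCL as `variable "region" { ... }`; Terraform loads both forms side by side.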
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
  required_version = "~> 0.13"

  backend "s3" {
    bucket               = "terraform-state-aak"
    region               = "us-east-1"
    encrypt              = "true"
    dynamodb_table       = "my-locking-table"
    role_arn             = "arn:aws:iam::111111111111:role/deploy-infrastructure"
    key                  = "app.tfstate"
    workspace_key_prefix = "env"
  }
}
module "cloudwatch" {
  source = "./modules/cloudwatch"
  name   = var.cloud_log_name
}
This is the starting point for all code to run
Terraform code always starts with a provider or with variables defined. They can share the same file, but that is not preferred: always separate variables from resources and modules.
This is where all base high level variables are defined
All variables used by your application as a whole are stored in this file. You might think this is a security issue because of secrets, but there are ways around that. Note that this file does not store computed values.
Allows resources and variables to be accessed outside scope.
The outputs file uses a structure similar to variables, but each block has the type `output`, and its value references the resource or module the value comes from. This also allows you to pull attributes of nested resources.
output "alb_name" {
  value = module.service_alb.alb_name
}
This is storage of each piece of infrastructure broken down.
If creating a new AWS application with an RDS node and elasticache tied to your EC2 instance you would want 1 module for RDS, 1 module for elasticache, and 1 module for EC2.
This holds local run data of terraform.
This directory is vital for running Terraform locally on a computer. If you go further and integrate with a CI provider or hook up a remote backend, it becomes a less important piece.
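Putting the pieces above together, a typical project layout (file and directory names here are illustrative, following the RDS/ElastiCache/EC2 example) might look like:

```
my-app/
├── main.tf          # providers, backend, module calls
├── variables.tf     # high-level input variables
├── outputs.tf       # values exposed outside this configuration
├── modules/
│   ├── rds/
│   ├── elasticache/
│   └── ec2/
└── .terraform/      # local run data, created by `terraform init`
```

Running `terraform init` in `my-app/` downloads providers and modules into `.terraform/`; `terraform plan` and `terraform apply` then operate on everything in the directory.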
# Single Variable
variable "aws_region" {
  description = "AWS Region"
  default     = "us-east-1"
}
# List type
variable "application_ports" {
  type = list(object({
    internal = number,
    external = number,
    protocol = string
  }))
  default = [
    {
      internal = 8080
      external = 80
      protocol = "tcp"
    },
    {
      internal = 8081
      external = 443
      protocol = "tcp"
    }
  ]
}
To access a variable, reference it from the scope where it lives:
module "cloudwatch" {
  source = "./modules/cloudwatch"
  name   = var.cloud_log_name
}
Change interpolation of variables based on conditional data
module "elasticache" {
  source           = "./modules/elasticache"
  project_name     = var.project_name
  environment_name = var.env
  engine_version   = var.engine_version
  instance_type    = var.env == "production" ? var.prod_size : var.dev_size
}
In v0.11 this required many hacks; most of them were fixed in v0.12.
Custom validation, added in 0.13, lets you validate the values passed to variables.
variable "cluster_min_size" {
  description = "Minimum number of running EC2 instances in the cluster."
  type        = number
  default     = 0

  validation {
    condition     = var.cluster_min_size >= 0
    error_message = "Value must be a non-negative integer."
  }
}
Sensitive values, added in 0.14, let you redact data in plan and apply output.
variable "user_information" {
  type = object({
    name    = string
    address = string
  })
  sensitive = true
}

resource "some_resource" "a" {
  name    = var.user_information.name
  address = var.user_information.address
}
There are many built-in Terraform functions for computing and transforming variables.
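A few of the common built-ins, sketched with hypothetical variable names (`var.name`, `var.environment`, `var.instance_sizes` are illustrative):

```hcl
locals {
  # join a list into a comma-separated string -> "80,443,8080"
  port_list = join(",", ["80", "443", "8080"])

  # build a name from several parts
  role_name = format("%s-%s-role", var.name, var.environment)

  # normalize casing
  env_upper = upper(var.environment)

  # pick a value out of a map, with a fallback default
  size = lookup(var.instance_sizes, var.environment, "t2.micro")
}
```

You can experiment with any of these interactively using `terraform console`.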
These are the most common types of providers used when writing Terraform code.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "3.65.0"
    }
  }
  required_version = "~> 0.15"
}

provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.aws_region
}

provider "google" {
  credentials = file("account.json")
  project     = var.gcp_project
  region      = var.gcp_region
}
Providers exist for infrastructure monitoring tools that are backed by APIs.
It is powerful to have a single place for every component your application needs, right down to your source control.
provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}

# Configure the PagerDuty provider
provider "pagerduty" {
  token = var.pagerduty_token
}

# Configure the GitHub Provider
provider "github" {
  token        = var.github_token
  organization = var.github_organization
}

# Configure the Cloudflare provider
provider "cloudflare" {
  email = var.cloudflare_email
  token = var.cloudflare_token
}
Some people wrap other APIs into providers you can call. Remember: if it has a JSON payload, it is easily replicated in HCL.
provider "dominos" {
  first_name    = var.dom_first_name
  last_name     = var.dom_last_name
  email_address = var.dom_email
  phone_number  = var.dom_phone

  credit_card {
    number = var.dom_card.num
    cvv    = var.dom_card.cvv
    date   = var.dom_card.date
    zip    = var.dom_card.zip
  }
}

# External DB provider for big data
provider "snowflake" {
  account = "..."
  role    = "..."
  region  = "..."
}
These are examples of resources and of data sources such as templates.
resource "aws_db_parameter_group" "rds_parameter_group" {
  name   = "${var.name}-${var.environment}-rds-pg-11"
  family = "postgres11"
}

data "template_file" "event_rule" {
  template = file("${path.module}/event-rule.json")
  vars = {
    cluster_arn = aws_ecs_cluster.default.id
  }
}
Include this as a bonus
Fill out a full definition of the module, and show how to use it in a file that calls the module.
How to call official modules
# Pull from the Terraform Registry rather than writing your own modules
module "ec2_cluster" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 2.0"

  name           = "my-cluster"
  instance_count = 5

  ami                    = "ami-ebd02392"
  instance_type          = "t2.micro"
  key_name               = "user1"
  monitoring             = true
  vpc_security_group_ids = ["sg-12345678"]
  subnet_id              = "subnet-eddcdzz4"

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}
How to call custom modules
module "cool_thing" {
  source  = "git::git@github.com:enthought/terraform-modules.git//cool_thing?ref=v0.1.0"
  var_one = "foo"
  var_two = "bar"
}
resource "aws_instance" "web" {
  ami           = "ami-a1b2c3d4"
  instance_type = "t2.micro"
}

resource "aws_iam_role" "lambda_iam_role" {
  name               = "${var.name}-${var.environment}-ecs-scale-role"
  assume_role_policy = data.aws_iam_policy_document.lambda_role_document.json
}
# Create a new monitor
resource "datadog_monitor" "default" {
  # ...
}

# Create a new timeboard
resource "datadog_timeboard" "default" {
  # ...
}
Stops a resource from being destroyed on `terraform destroy`.
This setting can save you many times over when applied to a database, making sure your data stays safe and secure.
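A minimal sketch of guarding a database this way (the resource name is hypothetical):

```hcl
resource "aws_db_instance" "main" {
  # ... engine, instance class, credentials ...

  lifecycle {
    # terraform destroy, or any plan that would delete this database,
    # errors out instead of destroying it
    prevent_destroy = true
  }
}
```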
This is important for preventing downtime.
`create_before_destroy` is used heavily when replacing EC2 instances or Google apps: the new instance (or auto scaling group) is created and traffic is flipped over to it before the old one is destroyed. One less problem during deploys!
Use `ignore_changes` if you want an attribute managed outside Terraform.
This is useful with services that apply automatic updates: for instance, if you use automated updates for RDS, you can ignore `engine_version` so your next push goes through without complaining that the version is out of sync.
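A sketch of the RDS scenario just described (resource name and versions are illustrative):

```hcl
resource "aws_db_instance" "main" {
  engine         = "postgres"
  engine_version = "11.5"
  # ... other settings ...

  lifecycle {
    # RDS auto-minor-version upgrades change engine_version outside
    # Terraform; ignore that drift so future plans stay clean
    ignore_changes = [engine_version]
  }
}
```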
This can be super important for slow APIs (AWS, cough cough).
Cloud provider APIs get overwhelmed many times throughout the day, and provisioning something like a new EC2 instance can take a long time. Timeouts let the operation wait longer before the apply is deemed failed.
timeouts {
  create = "60m"
  delete = "2h"
}
[
  {
    "cpu": 10,
    "essential": true,
    "image": "datadog/agent",
    "memoryReservation": 128,
    "name": "dd-agent",
    "portMappings": [
      {
        "containerPort": 8125,
        "protocol": "udp"
      }
    ],
    "privileged": true,
    "environment": [
      {
        "name": "DD_API_KEY",
        "value": "${datadog_license}"
      },
      {
        "name": "DD_DOCKER_LABELS_AS_TAGS",
        "value": "true"
      },
      {
        "name": "DD_PROCESS_AGENT_ENABLED",
        "value": "true"
      }
    ]
  }
]
# Find the latest available AMI that is tagged with Component = web
data "aws_ami" "web" {
  filter {
    name   = "state"
    values = ["available"]
  }

  filter {
    name   = "tag:Component"
    values = ["web", "myapp"]
  }

  most_recent = true
}
A "backend" in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc.
These go at the top of your main.tf or in a separate file called backend.tf.
Keep this file secure!
terraform {
  backend "consul" {
    address = "demo.consul.io"
    scheme  = "https"
    path    = "example_app/terraform_state"
  }
}
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
lifecycle {
  create_before_destroy = true
}
* If you have proper deploy tooling for your applications, you may not want to deploy application code with Terraform.
#!/bin/bash
set -o pipefail
set -e
# Move To terraform Directory
cd $( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
# Set Environment Variables that need to be calculated
source ./set-env.sh
# Log in to Elastic Container Service
eval `aws ecr get-login --region ${CCD_AWS_REGION} --no-include-email`
PHP_REPOSITORY=${CCD_AWS_ACCOUNT_ID}.dkr.ecr.${CCD_AWS_REGION}.amazonaws.com/app-name/php
# Build Container Images
docker build -t $PHP_REPOSITORY -f ../build/php/Dockerfile ../
# Tag Container Images && Push New Containers
docker tag $PHP_REPOSITORY $PHP_REPOSITORY:${APP_VERSION}
docker push $PHP_REPOSITORY:${APP_VERSION}
# install gcloud
gcloud auth configure-docker
# build your container
cd /path/to/dockerfile
docker build .
# [SOURCE_IMAGE] is the local image name or image ID
# [PROJECT-ID] is your GCP account project id
docker tag [SOURCE_IMAGE] gcr.io/[PROJECT-ID]/php-app
docker push gcr.io/[PROJECT-ID]/php-app
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.65.0"
    }
  }

  # Note: backend blocks cannot reference variables. Use literal values
  # here, or pass them at init time with `terraform init -backend-config=...`.
  backend "gcs" {
    bucket = "app_backend"
    prefix = "terraform/state/prod"
  }
}
provider "google" {
  credentials = sensitive(file("account.json"))
  project     = sensitive(var.project_id)
  region      = var.region
}

module "cloudrun" {
  source             = "./modules/cloudrun"
  service_name       = var.service_name
  project_id         = var.project_id
  location           = var.region
  url                = var.url
  container_location = var.container_location
}
variable "backend_bucket" {
  type        = string
  default     = "app_backend"
  description = "The backend bucket location."
}

variable "backend_prefix" {
  type        = string
  default     = "terraform/state/prod"
  description = "The prefix inside the bucket to go down into, allowing you to swap environments if you want."
}

variable "project_id" {
  type        = string
  default     = "project_id"
  description = "The project id associated with the GCP account."
}

variable "region" {
  type        = string
  default     = "us-central1"
  description = "The GCP region for the application."
}

variable "service_name" {
  type        = string
  default     = "cool-php-app-service"
  description = "The GCP name for the Cloud Run service."
}

variable "url" {
  type        = string
  default     = "https://URL_ASSOCIATED_TO_ACCOUNT.com"
  description = "The url the application should run at."
}

variable "container_location" {
  type        = string
  default     = "gcr url"
  description = "The GCR location of our application's Docker container."
}
resource "google_cloud_run_service" "default" {
  name     = var.service_name
  location = var.location

  metadata {
    namespace = var.project_id
  }

  template {
    spec {
      containers {
        image = var.container_location
      }
    }
  }
}
# The Service is ready to be used when the "Ready" condition is True
# Due to Terraform and API limitations this is best accessed through a local variable
locals {
  cloud_run_status = {
    for cond in google_cloud_run_service.default.status[0].conditions : cond.type => cond.status
  }
}
resource "google_cloud_run_domain_mapping" "default" {
  location = var.location
  name     = var.url

  metadata {
    namespace = var.project_id
  }

  spec {
    route_name = google_cloud_run_service.default.name
  }
}
variable "project_id" {
  type        = string
  description = "The project id associated with the GCP account."
}

variable "location" {
  type        = string
  default     = "us-central1"
  description = "The GCP location (region) for the Cloud Run service."
}

variable "service_name" {
  type        = string
  default     = "cool-php-app-service"
  description = "The GCP name for the Cloud Run service."
}

variable "url" {
  type        = string
  default     = "https://URL_ASSOCIATED_TO_ACCOUNT.com"
  description = "The url the application should run at."
}

variable "container_location" {
  type        = string
  default     = "gcr url"
  description = "The GCR location of our application's Docker container."
}
output "isReady" {
  value = local.cloud_run_status["Ready"] == "True"
}